2026-04-04 00:00:09.561862 | Job console starting
2026-04-04 00:00:09.587237 | Updating git repos
2026-04-04 00:00:09.697489 | Cloning repos into workspace
2026-04-04 00:00:10.097619 | Restoring repo states
2026-04-04 00:00:10.182813 | Merging changes
2026-04-04 00:00:10.182864 | Checking out repos
2026-04-04 00:00:10.737634 | Preparing playbooks
2026-04-04 00:00:11.621456 | Running Ansible setup
2026-04-04 00:00:19.740156 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-04 00:00:21.931619 |
2026-04-04 00:00:21.931735 | PLAY [Base pre]
2026-04-04 00:00:21.956584 |
2026-04-04 00:00:21.956694 | TASK [Setup log path fact]
2026-04-04 00:00:21.998659 | orchestrator | ok
2026-04-04 00:00:22.030319 |
2026-04-04 00:00:22.030441 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-04 00:00:22.093968 | orchestrator | ok
2026-04-04 00:00:22.126907 |
2026-04-04 00:00:22.127034 | TASK [emit-job-header : Print job information]
2026-04-04 00:00:22.195007 | # Job Information
2026-04-04 00:00:22.195166 | Ansible Version: 2.16.14
2026-04-04 00:00:22.195195 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-04-04 00:00:22.195277 | Pipeline: periodic-midnight
2026-04-04 00:00:22.195304 | Executor: 521e9411259a
2026-04-04 00:00:22.195322 | Triggered by: https://github.com/osism/testbed
2026-04-04 00:00:22.195341 | Event ID: d88433eb18ea4d9ba93b69c4517821a5
2026-04-04 00:00:22.200789 |
2026-04-04 00:00:22.200878 | LOOP [emit-job-header : Print node information]
2026-04-04 00:00:22.471607 | orchestrator | ok:
2026-04-04 00:00:22.471783 | orchestrator | # Node Information
2026-04-04 00:00:22.471814 | orchestrator | Inventory Hostname: orchestrator
2026-04-04 00:00:22.471836 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-04 00:00:22.471855 | orchestrator | Username: zuul-testbed04
2026-04-04 00:00:22.471873 | orchestrator | Distro: Debian 12.13
2026-04-04 00:00:22.471893 | orchestrator | Provider: static-testbed
2026-04-04 00:00:22.471911 | orchestrator | Region:
2026-04-04 00:00:22.471928 | orchestrator | Label: testbed-orchestrator
2026-04-04 00:00:22.471945 | orchestrator | Product Name: OpenStack Nova
2026-04-04 00:00:22.471961 | orchestrator | Interface IP: 81.163.193.140
2026-04-04 00:00:22.491564 |
2026-04-04 00:00:22.491665 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-04 00:00:23.582159 | orchestrator -> localhost | changed
2026-04-04 00:00:23.588442 |
2026-04-04 00:00:23.588531 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-04 00:00:25.992800 | orchestrator -> localhost | changed
2026-04-04 00:00:26.004215 |
2026-04-04 00:00:26.004313 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-04 00:00:26.647521 | orchestrator -> localhost | ok
2026-04-04 00:00:26.661165 |
2026-04-04 00:00:26.661280 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-04 00:00:26.701871 | orchestrator | ok
2026-04-04 00:00:26.743147 | orchestrator | included: /var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-04 00:00:26.757325 |
2026-04-04 00:00:26.757419 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-04 00:00:30.538434 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-04 00:00:30.538602 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/5467f274f7104821808ed5960c284cbe_id_rsa
2026-04-04 00:00:30.538634 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/5467f274f7104821808ed5960c284cbe_id_rsa.pub
2026-04-04 00:00:30.538656 | orchestrator -> localhost | The key fingerprint is:
2026-04-04 00:00:30.538678 | orchestrator -> localhost | SHA256:EotKBilEq8xTh+gyzyBMQUh+49gBIwh7Zv8Sm96FkvE zuul-build-sshkey
2026-04-04 00:00:30.538697 | orchestrator -> localhost | The key's randomart image is:
2026-04-04 00:00:30.538724 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-04 00:00:30.538743 | orchestrator -> localhost | |XO |
2026-04-04 00:00:30.538761 | orchestrator -> localhost | |*o* . |
2026-04-04 00:00:30.538777 | orchestrator -> localhost | |==+* .. |
2026-04-04 00:00:30.538794 | orchestrator -> localhost | |O=*.+. o |
2026-04-04 00:00:30.538811 | orchestrator -> localhost | |*Boo= o S |
2026-04-04 00:00:30.539030 | orchestrator -> localhost | |oB.. O o |
2026-04-04 00:00:30.539081 | orchestrator -> localhost | | + * E . |
2026-04-04 00:00:30.539102 | orchestrator -> localhost | | . + . |
2026-04-04 00:00:30.539122 | orchestrator -> localhost | | . . |
2026-04-04 00:00:30.539140 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-04 00:00:30.539188 | orchestrator -> localhost | ok: Runtime: 0:00:02.866062
2026-04-04 00:00:30.558781 |
2026-04-04 00:00:30.559314 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-04 00:00:30.601637 | orchestrator | ok
2026-04-04 00:00:30.616950 | orchestrator | included: /var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-04 00:00:30.657521 |
2026-04-04 00:00:30.657619 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-04 00:00:30.712112 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:30.719519 |
2026-04-04 00:00:30.719625 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-04 00:00:32.094688 | orchestrator | changed
2026-04-04 00:00:32.101417 |
2026-04-04 00:00:32.101500 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-04 00:00:32.450103 | orchestrator | ok
2026-04-04 00:00:32.456005 |
2026-04-04 00:00:32.456105 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-04 00:00:32.953328 | orchestrator | ok
2026-04-04 00:00:32.958061 |
2026-04-04 00:00:32.958149 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-04 00:00:33.469029 | orchestrator | ok
2026-04-04 00:00:33.473879 |
2026-04-04 00:00:33.473961 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-04 00:00:33.536772 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:33.542256 |
2026-04-04 00:00:33.542338 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-04 00:00:34.661241 | orchestrator -> localhost | changed
2026-04-04 00:00:34.672371 |
2026-04-04 00:00:34.672463 | TASK [add-build-sshkey : Add back temp key]
2026-04-04 00:00:35.497502 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/5467f274f7104821808ed5960c284cbe_id_rsa (zuul-build-sshkey)
2026-04-04 00:00:35.497710 | orchestrator -> localhost | ok: Runtime: 0:00:00.018461
2026-04-04 00:00:35.513542 |
2026-04-04 00:00:35.513640 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-04 00:00:36.252410 | orchestrator | ok
2026-04-04 00:00:36.257245 |
2026-04-04 00:00:36.257325 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-04 00:00:36.290384 | orchestrator | skipping: Conditional result was False
2026-04-04 00:00:36.404604 |
2026-04-04 00:00:36.407491 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-04 00:00:37.048413 | orchestrator | ok
2026-04-04 00:00:37.069799 |
2026-04-04 00:00:37.069899 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-04 00:00:37.123146 | orchestrator | ok
2026-04-04 00:00:37.129762 |
2026-04-04 00:00:37.129849 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-04 00:00:37.895274 | orchestrator -> localhost | ok
2026-04-04 00:00:37.901245 |
2026-04-04 00:00:37.901330 | TASK [validate-host : Collect information about the host]
2026-04-04 00:00:39.721172 | orchestrator | ok
2026-04-04 00:00:39.756772 |
2026-04-04 00:00:39.756881 | TASK [validate-host : Sanitize hostname]
2026-04-04 00:00:39.840517 | orchestrator | ok
2026-04-04 00:00:39.844882 |
2026-04-04 00:00:39.844961 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-04 00:00:41.310649 | orchestrator -> localhost | changed
2026-04-04 00:00:41.316086 |
2026-04-04 00:00:41.316176 | TASK [validate-host : Collect information about zuul worker]
2026-04-04 00:00:41.815839 | orchestrator | ok
2026-04-04 00:00:41.820864 |
2026-04-04 00:00:41.820956 | TASK [validate-host : Write out all zuul information for each host]
2026-04-04 00:00:43.687321 | orchestrator -> localhost | changed
2026-04-04 00:00:43.696757 |
2026-04-04 00:00:43.696847 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-04 00:00:43.981271 | orchestrator | ok
2026-04-04 00:00:43.988810 |
2026-04-04 00:00:43.988896 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-04 00:02:07.872405 | orchestrator | changed:
2026-04-04 00:02:07.875255 | orchestrator | .d..t...... src/
2026-04-04 00:02:07.875330 | orchestrator | .d..t...... src/github.com/
2026-04-04 00:02:07.875358 | orchestrator | .d..t...... src/github.com/osism/
2026-04-04 00:02:07.875380 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-04 00:02:07.875400 | orchestrator | RedHat.yml
2026-04-04 00:02:07.890510 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-04 00:02:07.890528 | orchestrator | RedHat.yml
2026-04-04 00:02:07.890579 | orchestrator | = 1.53.0"...
2026-04-04 00:02:19.918447 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-04-04 00:02:19.936876 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-04 00:02:20.477015 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-04 00:02:21.089031 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-04 00:02:21.148347 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-04 00:02:21.566975 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-04 00:02:21.674392 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-04 00:02:22.507463 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-04 00:02:22.507548 | orchestrator |
2026-04-04 00:02:22.507556 | orchestrator | Providers are signed by their developers.
2026-04-04 00:02:22.507561 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-04 00:02:22.507574 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-04 00:02:22.507608 | orchestrator |
2026-04-04 00:02:22.507614 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-04 00:02:22.507630 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-04 00:02:22.507634 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-04 00:02:22.507646 | orchestrator | you run "tofu init" in the future.
2026-04-04 00:02:22.508108 | orchestrator |
2026-04-04 00:02:22.508153 | orchestrator | OpenTofu has been successfully initialized!
2026-04-04 00:02:22.508180 | orchestrator |
2026-04-04 00:02:22.508185 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-04 00:02:22.508189 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-04 00:02:22.508194 | orchestrator | should now work.
2026-04-04 00:02:22.508198 | orchestrator |
2026-04-04 00:02:22.508202 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-04 00:02:22.508206 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-04 00:02:22.508219 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-04 00:02:22.688436 | orchestrator | Created and switched to workspace "ci"!
2026-04-04 00:02:22.688511 | orchestrator |
2026-04-04 00:02:22.688523 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-04 00:02:22.688534 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-04 00:02:22.688544 | orchestrator | for this configuration.
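The provider-installation and lock-file messages above come from `tofu init` resolving the configuration's `required_providers` block. As a rough sketch only (the actual testbed configuration is not shown in this log; the only constraint visible here is `>= 2.2.0` for hashicorp/local, and hashicorp/null is resolved to its latest version, so the other entries are assumptions), such a block would look like:

```hcl
terraform {
  required_providers {
    # ">= 2.2.0" matches the constraint shown in the init output
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # no version constraint: init logs "Finding latest version of hashicorp/null"
    null = {
      source = "hashicorp/null"
    }
    # constraint assumed; the log fragment only shows '= 1.53.0"...'
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}
```

Running `tofu init` against a configuration like this resolves and installs the providers, then pins the selected versions in `.terraform.lock.hcl`, exactly as the messages describe.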
2026-04-04 00:02:22.818844 | orchestrator | ci.auto.tfvars
2026-04-04 00:02:22.822154 | orchestrator | default_custom.tf
2026-04-04 00:02:24.562206 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-04 00:02:25.135806 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-04 00:02:25.446511 | orchestrator |
2026-04-04 00:02:25.446602 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-04 00:02:25.446617 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-04 00:02:25.446665 | orchestrator | + create
2026-04-04 00:02:25.446700 | orchestrator | <= read (data resources)
2026-04-04 00:02:25.446731 | orchestrator |
2026-04-04 00:02:25.446743 | orchestrator | OpenTofu will perform the following actions:
2026-04-04 00:02:25.447017 | orchestrator |
2026-04-04 00:02:25.447052 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-04 00:02:25.447066 | orchestrator | # (config refers to values not yet known)
2026-04-04 00:02:25.447076 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-04 00:02:25.447087 | orchestrator | + checksum = (known after apply)
2026-04-04 00:02:25.447097 | orchestrator | + created_at = (known after apply)
2026-04-04 00:02:25.447107 | orchestrator | + file = (known after apply)
2026-04-04 00:02:25.447117 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.447152 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.447163 | orchestrator | + min_disk_gb = (known after apply)
2026-04-04 00:02:25.447173 | orchestrator | + min_ram_mb = (known after apply)
2026-04-04 00:02:25.447183 | orchestrator | + most_recent = true
2026-04-04 00:02:25.447193 | orchestrator | + name = (known after apply)
2026-04-04 00:02:25.447203 | orchestrator | + protected = (known after apply)
2026-04-04 00:02:25.447212 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.447225 | orchestrator | + schema = (known after apply)
2026-04-04 00:02:25.447235 | orchestrator | + size_bytes = (known after apply)
2026-04-04 00:02:25.447245 | orchestrator | + tags = (known after apply)
2026-04-04 00:02:25.447255 | orchestrator | + updated_at = (known after apply)
2026-04-04 00:02:25.447265 | orchestrator | }
2026-04-04 00:02:25.447425 | orchestrator |
2026-04-04 00:02:25.447450 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-04 00:02:25.447459 | orchestrator | # (config refers to values not yet known)
2026-04-04 00:02:25.447468 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-04 00:02:25.447476 | orchestrator | + checksum = (known after apply)
2026-04-04 00:02:25.447484 | orchestrator | + created_at = (known after apply)
2026-04-04 00:02:25.447492 | orchestrator | + file = (known after apply)
2026-04-04 00:02:25.447500 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.447508 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.447516 | orchestrator | + min_disk_gb = (known after apply)
2026-04-04 00:02:25.447523 | orchestrator | + min_ram_mb = (known after apply)
2026-04-04 00:02:25.447531 | orchestrator | + most_recent = true
2026-04-04 00:02:25.447539 | orchestrator | + name = (known after apply)
2026-04-04 00:02:25.447547 | orchestrator | + protected = (known after apply)
2026-04-04 00:02:25.447555 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.447563 | orchestrator | + schema = (known after apply)
2026-04-04 00:02:25.447571 | orchestrator | + size_bytes = (known after apply)
2026-04-04 00:02:25.447579 | orchestrator | + tags = (known after apply)
2026-04-04 00:02:25.447587 | orchestrator | + updated_at = (known after apply)
2026-04-04 00:02:25.447595 | orchestrator | }
2026-04-04 00:02:25.447731 | orchestrator |
2026-04-04 00:02:25.447755 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-04 00:02:25.447765 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-04 00:02:25.447773 | orchestrator | + content = (known after apply)
2026-04-04 00:02:25.447781 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-04 00:02:25.447805 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-04 00:02:25.447813 | orchestrator | + content_md5 = (known after apply)
2026-04-04 00:02:25.447821 | orchestrator | + content_sha1 = (known after apply)
2026-04-04 00:02:25.447829 | orchestrator | + content_sha256 = (known after apply)
2026-04-04 00:02:25.447837 | orchestrator | + content_sha512 = (known after apply)
2026-04-04 00:02:25.447845 | orchestrator | + directory_permission = "0777"
2026-04-04 00:02:25.447854 | orchestrator | + file_permission = "0644"
2026-04-04 00:02:25.447862 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-04 00:02:25.447869 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.447877 | orchestrator | }
2026-04-04 00:02:25.448008 | orchestrator |
2026-04-04 00:02:25.448033 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-04 00:02:25.448042 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-04 00:02:25.448050 | orchestrator | + content = (known after apply)
2026-04-04 00:02:25.448058 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-04 00:02:25.448066 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-04 00:02:25.448074 | orchestrator | + content_md5 = (known after apply)
2026-04-04 00:02:25.448082 | orchestrator | + content_sha1 = (known after apply)
2026-04-04 00:02:25.448090 | orchestrator | + content_sha256 = (known after apply)
2026-04-04 00:02:25.448106 | orchestrator | + content_sha512 = (known after apply)
2026-04-04 00:02:25.448114 | orchestrator | + directory_permission = "0777"
2026-04-04 00:02:25.448122 | orchestrator | + file_permission = "0644"
2026-04-04 00:02:25.448138 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-04 00:02:25.448146 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.448154 | orchestrator | }
2026-04-04 00:02:25.448285 | orchestrator |
2026-04-04 00:02:25.448310 | orchestrator | # local_file.inventory will be created
2026-04-04 00:02:25.448319 | orchestrator | + resource "local_file" "inventory" {
2026-04-04 00:02:25.448327 | orchestrator | + content = (known after apply)
2026-04-04 00:02:25.448335 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-04 00:02:25.448343 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-04 00:02:25.448351 | orchestrator | + content_md5 = (known after apply)
2026-04-04 00:02:25.448359 | orchestrator | + content_sha1 = (known after apply)
2026-04-04 00:02:25.448368 | orchestrator | + content_sha256 = (known after apply)
2026-04-04 00:02:25.448376 | orchestrator | + content_sha512 = (known after apply)
2026-04-04 00:02:25.448384 | orchestrator | + directory_permission = "0777"
2026-04-04 00:02:25.448392 | orchestrator | + file_permission = "0644"
2026-04-04 00:02:25.448399 | orchestrator | + filename = "inventory.ci"
2026-04-04 00:02:25.448407 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.448415 | orchestrator | }
2026-04-04 00:02:25.448548 | orchestrator |
2026-04-04 00:02:25.448574 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-04 00:02:25.448583 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-04 00:02:25.448591 | orchestrator | + content = (sensitive value)
2026-04-04 00:02:25.448599 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-04 00:02:25.448607 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-04 00:02:25.448615 | orchestrator | + content_md5 = (known after apply)
2026-04-04 00:02:25.448622 | orchestrator | + content_sha1 = (known after apply)
2026-04-04 00:02:25.448630 | orchestrator | + content_sha256 = (known after apply)
2026-04-04 00:02:25.448638 | orchestrator | + content_sha512 = (known after apply)
2026-04-04 00:02:25.448646 | orchestrator | + directory_permission = "0700"
2026-04-04 00:02:25.448654 | orchestrator | + file_permission = "0600"
2026-04-04 00:02:25.448662 | orchestrator | + filename = ".id_rsa.ci"
2026-04-04 00:02:25.448670 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.448677 | orchestrator | }
2026-04-04 00:02:25.448719 | orchestrator |
2026-04-04 00:02:25.448743 | orchestrator | # null_resource.node_semaphore will be created
2026-04-04 00:02:25.448752 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-04 00:02:25.448760 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.448768 | orchestrator | }
2026-04-04 00:02:25.448931 | orchestrator |
2026-04-04 00:02:25.448954 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-04 00:02:25.448962 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-04 00:02:25.448969 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.448975 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.448982 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.448989 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.448996 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.449002 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-04 00:02:25.449009 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.449016 | orchestrator | + size = 80
2026-04-04 00:02:25.449023 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.449030 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.449036 | orchestrator | }
2026-04-04 00:02:25.449138 | orchestrator |
2026-04-04 00:02:25.449159 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-04 00:02:25.449167 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.449174 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.449180 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.449187 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.449200 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.449207 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.449213 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-04 00:02:25.449220 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.449227 | orchestrator | + size = 80
2026-04-04 00:02:25.449233 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.449240 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.449247 | orchestrator | }
2026-04-04 00:02:25.449348 | orchestrator |
2026-04-04 00:02:25.449368 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-04 00:02:25.449376 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.449383 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.449390 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.449396 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.449403 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.449410 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.449416 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-04 00:02:25.449423 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.449430 | orchestrator | + size = 80
2026-04-04 00:02:25.449436 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.449443 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.449450 | orchestrator | }
2026-04-04 00:02:25.449546 | orchestrator |
2026-04-04 00:02:25.449567 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-04 00:02:25.449577 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.449588 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.449597 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.449604 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.449611 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.449617 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.449624 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-04 00:02:25.449631 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.449637 | orchestrator | + size = 80
2026-04-04 00:02:25.449649 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.449656 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.449662 | orchestrator | }
2026-04-04 00:02:25.449762 | orchestrator |
2026-04-04 00:02:25.449781 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-04 00:02:25.449821 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.449829 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.449836 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.449843 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.449849 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.449856 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.449863 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-04 00:02:25.449869 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.449876 | orchestrator | + size = 80
2026-04-04 00:02:25.449883 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.449890 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.449896 | orchestrator | }
2026-04-04 00:02:25.449999 | orchestrator |
2026-04-04 00:02:25.450042 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-04 00:02:25.450051 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.450058 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.450065 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.450071 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.450085 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.450091 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.450098 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-04 00:02:25.450105 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.450112 | orchestrator | + size = 80
2026-04-04 00:02:25.450118 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.450125 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.450132 | orchestrator | }
2026-04-04 00:02:25.450235 | orchestrator |
2026-04-04 00:02:25.450254 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-04 00:02:25.450262 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-04 00:02:25.450268 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.450275 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.450282 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.450288 | orchestrator | + image_id = (known after apply)
2026-04-04 00:02:25.450295 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.450302 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-04 00:02:25.450308 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.450315 | orchestrator | + size = 80
2026-04-04 00:02:25.450322 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.450328 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.450335 | orchestrator | }
2026-04-04 00:02:25.450436 | orchestrator |
2026-04-04 00:02:25.450456 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-04 00:02:25.450464 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.450471 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.450478 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.450484 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.450491 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.450498 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-04 00:02:25.450505 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.450511 | orchestrator | + size = 20
2026-04-04 00:02:25.450518 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.450525 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.450532 | orchestrator | }
2026-04-04 00:02:25.450625 | orchestrator |
2026-04-04 00:02:25.450644 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-04 00:02:25.450652 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.450663 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.450674 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.450685 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.450695 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.450712 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-04 00:02:25.450724 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.450735 | orchestrator | + size = 20
2026-04-04 00:02:25.450746 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.450757 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.450767 | orchestrator | }
2026-04-04 00:02:25.450985 | orchestrator |
2026-04-04 00:02:25.451025 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-04 00:02:25.451038 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.451049 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.451060 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.451071 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.451082 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.451092 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-04 00:02:25.451104 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.451127 | orchestrator | + size = 20
2026-04-04 00:02:25.451139 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.451151 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.451163 | orchestrator | }
2026-04-04 00:02:25.451331 | orchestrator |
2026-04-04 00:02:25.451364 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-04 00:02:25.451376 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.451386 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.451397 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.451407 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.451427 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.451438 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-04 00:02:25.451449 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.451459 | orchestrator | + size = 20
2026-04-04 00:02:25.451470 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.451481 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.451491 | orchestrator | }
2026-04-04 00:02:25.451648 | orchestrator |
2026-04-04 00:02:25.451682 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-04 00:02:25.451693 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.451703 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.451712 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.451723 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.451734 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.451743 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-04 00:02:25.451752 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.451761 | orchestrator | + size = 20
2026-04-04 00:02:25.451772 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.451783 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.451815 | orchestrator | }
2026-04-04 00:02:25.451985 | orchestrator |
2026-04-04 00:02:25.452014 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-04 00:02:25.452025 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.452037 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.452048 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.452059 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.452070 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.452080 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-04 00:02:25.452089 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.452098 | orchestrator | + size = 20
2026-04-04 00:02:25.452106 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.452116 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.452126 | orchestrator | }
2026-04-04 00:02:25.452272 | orchestrator |
2026-04-04 00:02:25.452305 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-04 00:02:25.452315 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.452324 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.452334 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.452345 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.452355 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.452365 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-04 00:02:25.452375 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.452386 | orchestrator | + size = 20
2026-04-04 00:02:25.452397 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.452407 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.452417 | orchestrator | }
2026-04-04 00:02:25.452588 | orchestrator |
2026-04-04 00:02:25.452622 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-04 00:02:25.452635 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-04 00:02:25.452657 | orchestrator | + attachment = (known after apply)
2026-04-04 00:02:25.452668 | orchestrator | + availability_zone = "nova"
2026-04-04 00:02:25.452679 | orchestrator | + id = (known after apply)
2026-04-04 00:02:25.452690 | orchestrator | + metadata = (known after apply)
2026-04-04 00:02:25.452701 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-04 00:02:25.452711 | orchestrator | + region = (known after apply)
2026-04-04 00:02:25.452723 | orchestrator | + size = 20
2026-04-04 00:02:25.452733 | orchestrator | + volume_retype_policy = "never"
2026-04-04 00:02:25.452744 | orchestrator | + volume_type = "ssd"
2026-04-04 00:02:25.452755 | orchestrator | }
2026-04-04 00:02:25.452943 | orchestrator |
2026-04-04 00:02:25.452977 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-04 00:02:25.452989 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-04 00:02:25.452999 | orchestrator | + attachment = (known after apply) 2026-04-04 00:02:25.453010 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.453021 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.453031 | orchestrator | + metadata = (known after apply) 2026-04-04 00:02:25.453042 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-04 00:02:25.453053 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.453063 | orchestrator | + size = 20 2026-04-04 00:02:25.453074 | orchestrator | + volume_retype_policy = "never" 2026-04-04 00:02:25.453084 | orchestrator | + volume_type = "ssd" 2026-04-04 00:02:25.453094 | orchestrator | } 2026-04-04 00:02:25.453569 | orchestrator | 2026-04-04 00:02:25.453603 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-04 00:02:25.453615 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-04 00:02:25.453627 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.453638 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.453651 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.453663 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.453674 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.453684 | orchestrator | + config_drive = true 2026-04-04 00:02:25.453703 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.453714 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.453726 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-04 00:02:25.453737 | orchestrator | + force_delete = false 2026-04-04 00:02:25.453748 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.453759 | 
orchestrator | + id = (known after apply) 2026-04-04 00:02:25.453771 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.453782 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.453820 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.453832 | orchestrator | + name = "testbed-manager" 2026-04-04 00:02:25.453842 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.453853 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.453864 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.453874 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.453885 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.453896 | orchestrator | + user_data = (sensitive value) 2026-04-04 00:02:25.453907 | orchestrator | 2026-04-04 00:02:25.453918 | orchestrator | + block_device { 2026-04-04 00:02:25.453929 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.453940 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.453951 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.453962 | orchestrator | + multiattach = false 2026-04-04 00:02:25.453973 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.453983 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.454005 | orchestrator | } 2026-04-04 00:02:25.454015 | orchestrator | 2026-04-04 00:02:25.454068 | orchestrator | + network { 2026-04-04 00:02:25.454076 | orchestrator | + access_network = false 2026-04-04 00:02:25.454085 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.454094 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.454102 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.454110 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.454119 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.454129 | orchestrator | + uuid = (known after apply) 2026-04-04 
00:02:25.454139 | orchestrator | } 2026-04-04 00:02:25.454149 | orchestrator | } 2026-04-04 00:02:25.454167 | orchestrator | 2026-04-04 00:02:25.454178 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-04 00:02:25.454188 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.454198 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.454207 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.454217 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.454227 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.454236 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.454246 | orchestrator | + config_drive = true 2026-04-04 00:02:25.454257 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.454267 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.454277 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.454287 | orchestrator | + force_delete = false 2026-04-04 00:02:25.454297 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.454308 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.454318 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.454328 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.454338 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.454348 | orchestrator | + name = "testbed-node-0" 2026-04-04 00:02:25.454357 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.454366 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.454376 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.454386 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.454396 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.454406 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.454417 | orchestrator | 2026-04-04 00:02:25.454428 | orchestrator | + block_device { 2026-04-04 00:02:25.454438 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.454448 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.454457 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.454466 | orchestrator | + multiattach = false 2026-04-04 00:02:25.454476 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.454486 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.454495 | orchestrator | } 2026-04-04 00:02:25.454504 | orchestrator | 2026-04-04 00:02:25.454513 | orchestrator | + network { 2026-04-04 00:02:25.454521 | orchestrator | + access_network = false 2026-04-04 00:02:25.454530 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.454538 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.454547 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.454555 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.454564 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.454572 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.454581 | orchestrator | } 2026-04-04 00:02:25.454590 | orchestrator | } 2026-04-04 00:02:25.454599 | orchestrator | 2026-04-04 00:02:25.454608 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-04 00:02:25.454617 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.454625 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.454648 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.454657 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.454665 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.454674 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.454682 
| orchestrator | + config_drive = true 2026-04-04 00:02:25.454690 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.454699 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.454707 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.454716 | orchestrator | + force_delete = false 2026-04-04 00:02:25.454725 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.454734 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.454743 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.454752 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.454761 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.454770 | orchestrator | + name = "testbed-node-1" 2026-04-04 00:02:25.454779 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.454808 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.454818 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.454827 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.454837 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.454854 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.454864 | orchestrator | 2026-04-04 00:02:25.454873 | orchestrator | + block_device { 2026-04-04 00:02:25.454882 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.454891 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.454900 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.454910 | orchestrator | + multiattach = false 2026-04-04 00:02:25.454919 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.454929 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.454938 | orchestrator | } 2026-04-04 00:02:25.454948 | orchestrator | 2026-04-04 00:02:25.454958 | orchestrator | + network { 2026-04-04 00:02:25.454967 | orchestrator | + access_network = 
false 2026-04-04 00:02:25.454976 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.454985 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.454993 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.455002 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.455011 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.455021 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455029 | orchestrator | } 2026-04-04 00:02:25.455038 | orchestrator | } 2026-04-04 00:02:25.455047 | orchestrator | 2026-04-04 00:02:25.455056 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-04 00:02:25.455065 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.455074 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.455083 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.455094 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.455103 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.455113 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.455121 | orchestrator | + config_drive = true 2026-04-04 00:02:25.455140 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.455149 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.455158 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.455165 | orchestrator | + force_delete = false 2026-04-04 00:02:25.455170 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.455175 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.455181 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.455194 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.455199 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.455205 | orchestrator | + name = 
"testbed-node-2" 2026-04-04 00:02:25.455210 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.455216 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.455221 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.455226 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.455232 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.455237 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.455243 | orchestrator | 2026-04-04 00:02:25.455248 | orchestrator | + block_device { 2026-04-04 00:02:25.455254 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.455259 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.455264 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.455270 | orchestrator | + multiattach = false 2026-04-04 00:02:25.455275 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.455280 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455286 | orchestrator | } 2026-04-04 00:02:25.455291 | orchestrator | 2026-04-04 00:02:25.455297 | orchestrator | + network { 2026-04-04 00:02:25.455302 | orchestrator | + access_network = false 2026-04-04 00:02:25.455307 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.455313 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.455318 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.455324 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.455329 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.455334 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455340 | orchestrator | } 2026-04-04 00:02:25.455345 | orchestrator | } 2026-04-04 00:02:25.455351 | orchestrator | 2026-04-04 00:02:25.455361 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-04 00:02:25.455367 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.455373 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.455378 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.455383 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.455389 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.455394 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.455400 | orchestrator | + config_drive = true 2026-04-04 00:02:25.455405 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.455410 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.455416 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.455421 | orchestrator | + force_delete = false 2026-04-04 00:02:25.455427 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.455432 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.455438 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.455443 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.455448 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.455454 | orchestrator | + name = "testbed-node-3" 2026-04-04 00:02:25.455459 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.455465 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.455470 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.455475 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.455481 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.455486 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.455492 | orchestrator | 2026-04-04 00:02:25.455497 | orchestrator | + block_device { 2026-04-04 00:02:25.455503 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.455508 | orchestrator | + delete_on_termination = false 2026-04-04 
00:02:25.455514 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.455523 | orchestrator | + multiattach = false 2026-04-04 00:02:25.455529 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.455534 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455539 | orchestrator | } 2026-04-04 00:02:25.455545 | orchestrator | 2026-04-04 00:02:25.455550 | orchestrator | + network { 2026-04-04 00:02:25.455556 | orchestrator | + access_network = false 2026-04-04 00:02:25.455561 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.455567 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.455572 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.455577 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.455583 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.455588 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455593 | orchestrator | } 2026-04-04 00:02:25.455599 | orchestrator | } 2026-04-04 00:02:25.455604 | orchestrator | 2026-04-04 00:02:25.455610 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-04 00:02:25.455615 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.455621 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.455626 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.455632 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.455637 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.455642 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.455648 | orchestrator | + config_drive = true 2026-04-04 00:02:25.455653 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.455659 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.455664 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.455669 | 
orchestrator | + force_delete = false 2026-04-04 00:02:25.455675 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.455680 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.455685 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.455691 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.455696 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.455702 | orchestrator | + name = "testbed-node-4" 2026-04-04 00:02:25.455711 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.455716 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.455722 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.455727 | orchestrator | + stop_before_destroy = false 2026-04-04 00:02:25.455733 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.455739 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.455744 | orchestrator | 2026-04-04 00:02:25.455749 | orchestrator | + block_device { 2026-04-04 00:02:25.455755 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.455760 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.455766 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.455771 | orchestrator | + multiattach = false 2026-04-04 00:02:25.455777 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.455782 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455878 | orchestrator | } 2026-04-04 00:02:25.455906 | orchestrator | 2026-04-04 00:02:25.455912 | orchestrator | + network { 2026-04-04 00:02:25.455917 | orchestrator | + access_network = false 2026-04-04 00:02:25.455923 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.455928 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.455934 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.455939 | orchestrator | + name = (known 
after apply) 2026-04-04 00:02:25.455945 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.455950 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.455956 | orchestrator | } 2026-04-04 00:02:25.455961 | orchestrator | } 2026-04-04 00:02:25.455973 | orchestrator | 2026-04-04 00:02:25.455978 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-04 00:02:25.455984 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-04 00:02:25.455989 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-04 00:02:25.455994 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-04 00:02:25.456000 | orchestrator | + all_metadata = (known after apply) 2026-04-04 00:02:25.456005 | orchestrator | + all_tags = (known after apply) 2026-04-04 00:02:25.456010 | orchestrator | + availability_zone = "nova" 2026-04-04 00:02:25.456025 | orchestrator | + config_drive = true 2026-04-04 00:02:25.456031 | orchestrator | + created = (known after apply) 2026-04-04 00:02:25.456037 | orchestrator | + flavor_id = (known after apply) 2026-04-04 00:02:25.456042 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-04 00:02:25.456053 | orchestrator | + force_delete = false 2026-04-04 00:02:25.456059 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-04 00:02:25.456065 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.456070 | orchestrator | + image_id = (known after apply) 2026-04-04 00:02:25.456075 | orchestrator | + image_name = (known after apply) 2026-04-04 00:02:25.456081 | orchestrator | + key_pair = "testbed" 2026-04-04 00:02:25.456086 | orchestrator | + name = "testbed-node-5" 2026-04-04 00:02:25.456092 | orchestrator | + power_state = "active" 2026-04-04 00:02:25.456097 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.456103 | orchestrator | + security_groups = (known after apply) 2026-04-04 00:02:25.456108 | orchestrator | + 
stop_before_destroy = false 2026-04-04 00:02:25.456113 | orchestrator | + updated = (known after apply) 2026-04-04 00:02:25.456119 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-04 00:02:25.456124 | orchestrator | 2026-04-04 00:02:25.456130 | orchestrator | + block_device { 2026-04-04 00:02:25.456135 | orchestrator | + boot_index = 0 2026-04-04 00:02:25.456141 | orchestrator | + delete_on_termination = false 2026-04-04 00:02:25.456146 | orchestrator | + destination_type = "volume" 2026-04-04 00:02:25.456151 | orchestrator | + multiattach = false 2026-04-04 00:02:25.456157 | orchestrator | + source_type = "volume" 2026-04-04 00:02:25.456162 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.456168 | orchestrator | } 2026-04-04 00:02:25.456173 | orchestrator | 2026-04-04 00:02:25.456179 | orchestrator | + network { 2026-04-04 00:02:25.456184 | orchestrator | + access_network = false 2026-04-04 00:02:25.456189 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-04 00:02:25.456195 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-04 00:02:25.456200 | orchestrator | + mac = (known after apply) 2026-04-04 00:02:25.456206 | orchestrator | + name = (known after apply) 2026-04-04 00:02:25.456211 | orchestrator | + port = (known after apply) 2026-04-04 00:02:25.456217 | orchestrator | + uuid = (known after apply) 2026-04-04 00:02:25.456222 | orchestrator | } 2026-04-04 00:02:25.456228 | orchestrator | } 2026-04-04 00:02:25.456233 | orchestrator | 2026-04-04 00:02:25.456238 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-04 00:02:25.456243 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-04 00:02:25.456248 | orchestrator | + fingerprint = (known after apply) 2026-04-04 00:02:25.456253 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.456258 | orchestrator | + name = "testbed" 2026-04-04 00:02:25.456263 | orchestrator | + private_key = 
(sensitive value) 2026-04-04 00:02:25.456268 | orchestrator | + public_key = (known after apply) 2026-04-04 00:02:25.456273 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.456277 | orchestrator | + user_id = (known after apply) 2026-04-04 00:02:25.456282 | orchestrator | } 2026-04-04 00:02:25.456288 | orchestrator | 2026-04-04 00:02:25.456292 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-04 00:02:25.456297 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-04 00:02:25.456306 | orchestrator | + device = (known after apply) 2026-04-04 00:02:25.456311 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.456316 | orchestrator | + instance_id = (known after apply) 2026-04-04 00:02:25.456321 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.456330 | orchestrator | + volume_id = (known after apply) 2026-04-04 00:02:25.456335 | orchestrator | } 2026-04-04 00:02:25.456340 | orchestrator | 2026-04-04 00:02:25.456345 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-04 00:02:25.456350 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-04 00:02:25.456355 | orchestrator | + device = (known after apply) 2026-04-04 00:02:25.456360 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.456364 | orchestrator | + instance_id = (known after apply) 2026-04-04 00:02:25.456369 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.456374 | orchestrator | + volume_id = (known after apply) 2026-04-04 00:02:25.456379 | orchestrator | } 2026-04-04 00:02:25.456384 | orchestrator | 2026-04-04 00:02:25.456389 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-04 00:02:25.456393 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-04-04 00:02:25.456404 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-04 00:02:25.459284 | orchestrator | + network_id = (known after apply) 2026-04-04 00:02:25.459289 | orchestrator | + no_gateway = false 2026-04-04 00:02:25.459294 | orchestrator | + region = (known after apply) 2026-04-04 00:02:25.459299 | orchestrator | + service_types = (known after apply) 2026-04-04 00:02:25.459308 | orchestrator | + tenant_id = (known after apply) 2026-04-04 00:02:25.459313 | orchestrator | 2026-04-04 00:02:25.459318 | orchestrator | + allocation_pool { 2026-04-04 00:02:25.459323 | orchestrator | + end = "192.168.31.250" 2026-04-04 00:02:25.459327 | orchestrator | + start = "192.168.31.200" 2026-04-04 00:02:25.459332 | orchestrator | } 2026-04-04 00:02:25.459337 | orchestrator | } 2026-04-04 00:02:25.459342 | orchestrator | 2026-04-04 00:02:25.459347 | orchestrator | # terraform_data.image will be created 2026-04-04 00:02:25.459352 | orchestrator | + resource "terraform_data" "image" { 2026-04-04 00:02:25.459357 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.459362 | orchestrator | + input = "Ubuntu 24.04" 2026-04-04 00:02:25.459366 | orchestrator | + output = (known after apply) 2026-04-04 00:02:25.459371 | orchestrator | } 2026-04-04 00:02:25.459376 | orchestrator | 2026-04-04 00:02:25.459381 | orchestrator | # terraform_data.image_node will be created 2026-04-04 00:02:25.459386 | orchestrator | + resource "terraform_data" "image_node" { 2026-04-04 00:02:25.459391 | orchestrator | + id = (known after apply) 2026-04-04 00:02:25.459396 | orchestrator | + input = "Ubuntu 24.04" 2026-04-04 00:02:25.459400 | orchestrator | + output = (known after apply) 2026-04-04 00:02:25.459405 | orchestrator | } 2026-04-04 00:02:25.459410 | orchestrator | 2026-04-04 00:02:25.459415 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-04-04 00:02:25.459420 | orchestrator | 2026-04-04 00:02:25.459425 | orchestrator | Changes to Outputs: 2026-04-04 00:02:25.459430 | orchestrator | + manager_address = (sensitive value) 2026-04-04 00:02:25.459435 | orchestrator | + private_key = (sensitive value) 2026-04-04 00:02:25.593145 | orchestrator | terraform_data.image_node: Creating... 2026-04-04 00:02:25.728195 | orchestrator | terraform_data.image: Creating... 2026-04-04 00:02:25.728254 | orchestrator | terraform_data.image: Creation complete after 0s [id=6ecfd981-06b9-f04b-1397-341e1b8c1177] 2026-04-04 00:02:25.728268 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=44b3e380-9be5-611d-81c2-7a79a00cc839] 2026-04-04 00:02:25.748262 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-04-04 00:02:25.754719 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-04-04 00:02:25.760030 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-04-04 00:02:25.760937 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-04-04 00:02:25.761916 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-04-04 00:02:25.765742 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-04-04 00:02:25.765811 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-04-04 00:02:25.778231 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-04-04 00:02:25.778346 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-04-04 00:02:25.778413 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2026-04-04 00:02:26.247908 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-04 00:02:26.247959 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-04-04 00:02:26.247965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-04-04 00:02:26.251682 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-04-04 00:02:26.406933 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-04-04 00:02:26.413660 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-04-04 00:02:26.953407 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=10550a19-c187-4466-9844-656805d25cd1] 2026-04-04 00:02:26.957029 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-04-04 00:02:29.409328 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=aea0a796-d357-4fa7-8d72-1f8005c02d55] 2026-04-04 00:02:29.413874 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-04-04 00:02:29.430704 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=aa04dcb3-9f04-4660-8785-ade3b95c2bd8] 2026-04-04 00:02:29.436319 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-04-04 00:02:29.460669 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=19f8077a-5fb2-4798-9d2e-069ef293e905] 2026-04-04 00:02:29.460758 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=06ea839a-b266-4e51-93b3-b1dda83a55b8] 2026-04-04 00:02:29.478923 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
2026-04-04 00:02:29.483409 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-04-04 00:02:29.496294 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=86e206f3-2d5a-4624-95fc-aec866356159] 2026-04-04 00:02:29.503045 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-04-04 00:02:29.539466 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=4d96aee6-67ba-49f8-bc7c-2d85a42af737] 2026-04-04 00:02:29.544825 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e5c55c1d-a7d7-4703-805a-3622b0d8a5d5] 2026-04-04 00:02:29.544881 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-04-04 00:02:29.556665 | orchestrator | local_file.id_rsa_pub: Creating... 2026-04-04 00:02:29.567622 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=c489299de3855cac81b60e6f879d52f236215766] 2026-04-04 00:02:29.575523 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=b430c263-2f81-418d-8192-e181c70d45ae] 2026-04-04 00:02:29.575572 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-04-04 00:02:29.580573 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-04-04 00:02:29.586884 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=6219a71ec1e0a4fe6809c7502b93be8b292cd529] 2026-04-04 00:02:29.763529 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=5b6ff0f2-3c26-4156-872a-5361d1bd2bb9] 2026-04-04 00:02:30.307878 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=3c579845-c8df-472e-b97f-01d742bc5a30] 2026-04-04 00:02:30.671259 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=8f7cd792-bcb6-4183-b546-37dda0cec2ad] 2026-04-04 00:02:30.676825 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-04-04 00:02:32.880652 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=e8724c57-8a81-4b1a-b62f-30f3282a03e2] 2026-04-04 00:02:32.928484 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=c54ed99d-8116-431b-a73a-2dbb6ef64fe0] 2026-04-04 00:02:32.949971 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=293611d7-1fce-43f4-9a31-69acc773bdc1] 2026-04-04 00:02:32.973769 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=1df993b0-f2e3-4765-ad08-d2a9ca0c61ae] 2026-04-04 00:02:33.698984 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 5s [id=43a170e0-9151-405a-b413-7377f27a751c] 2026-04-04 00:02:36.493416 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=c7223361-eb25-4952-96a2-78fcadfdbdca] 2026-04-04 00:02:36.493479 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=f75c55f6-9419-45e7-93b4-4881457cdea0] 2026-04-04 00:02:36.493492 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 
2026-04-04 00:02:36.493502 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-04-04 00:02:36.493512 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-04-04 00:02:36.493521 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=6c076bf1-7065-4193-a97c-9b7a1fb7d804] 2026-04-04 00:02:36.493530 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-04-04 00:02:36.493539 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-04-04 00:02:36.493548 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-04-04 00:02:36.493578 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-04-04 00:02:36.493587 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-04-04 00:02:36.493596 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-04-04 00:02:36.493606 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=f2e5f51a-19bc-407c-9cc6-7daaf21a1ac2] 2026-04-04 00:02:36.493615 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-04-04 00:02:36.493623 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-04-04 00:02:36.493632 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-04-04 00:02:36.493655 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=afce8cbb-9025-4845-ad71-906096ed691e] 2026-04-04 00:02:36.493666 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-04-04 00:02:36.493675 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=ec31c847-6259-46fc-a764-38ee60fa05fe] 2026-04-04 00:02:36.493684 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-04-04 00:02:36.493692 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=be5e4eaf-fcd7-4245-8c03-7466b2e800c5] 2026-04-04 00:02:36.493701 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-04-04 00:02:36.493710 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=a713c87d-1ffb-4496-98a2-a2ed7234cf5a] 2026-04-04 00:02:36.493718 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=c3a16ef5-be67-44d2-ac9e-53724911c930] 2026-04-04 00:02:36.493727 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-04-04 00:02:36.493736 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-04-04 00:02:36.493745 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=30ee153e-077c-4b82-9068-7521e9675593] 2026-04-04 00:02:36.493754 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-04-04 00:02:36.493763 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=ce727481-611a-4b64-8a11-f971bacf4cec] 2026-04-04 00:02:36.493772 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2026-04-04 00:02:36.493781 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=30fc91b9-be6c-4231-907e-39f2b336c168] 2026-04-04 00:02:36.493790 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=d4a2d7ef-8a0b-4dbc-95ac-6fa1b5b2544d] 2026-04-04 00:02:36.493798 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=f90da1bd-275b-428e-9b1a-ec959c78347e] 2026-04-04 00:02:36.493807 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=2e69247a-4ff0-41e1-be07-a81ad57a4cee] 2026-04-04 00:02:36.493844 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=1607503f-8911-4273-a9d0-71611bbeea60] 2026-04-04 00:02:36.493859 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=5152e744-65e0-478c-a9d2-d5de8bb6d756] 2026-04-04 00:02:36.493868 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0f44392b-1814-4b4e-ad46-d6bbcc0d799c] 2026-04-04 00:02:36.493876 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=794e4a94-8fc3-4283-9a14-4429c1464a51] 2026-04-04 00:02:36.493885 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=7db8d041-ebbb-47b0-a7a8-c90ee9e4ccbe] 2026-04-04 00:02:41.034262 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=e63708df-8d85-43f1-91a5-07d074b37ba9] 2026-04-04 00:02:41.050684 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-04-04 00:02:41.070060 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-04-04 00:02:41.072735 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-04-04 00:02:41.073964 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-04-04 00:02:41.077537 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-04-04 00:02:41.087947 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-04-04 00:02:41.092354 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-04-04 00:02:43.830453 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=f7cf1620-e7ea-4df9-96d4-092068c88f8f] 2026-04-04 00:02:43.839444 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-04-04 00:02:43.847868 | orchestrator | local_file.inventory: Creating... 2026-04-04 00:02:43.848238 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-04-04 00:02:43.852327 | orchestrator | local_file.inventory: Creation complete after 0s [id=d55878c89f7d105f6b7b5f99f39b9e670234a633] 2026-04-04 00:02:43.854159 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=019601c940087fb05a454e6d5a275b3451da6c7c] 2026-04-04 00:02:45.548978 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=f7cf1620-e7ea-4df9-96d4-092068c88f8f] 2026-04-04 00:02:51.080250 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-04-04 00:02:51.080365 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-04-04 00:02:51.080380 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-04-04 00:02:51.082475 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-04-04 00:02:51.091865 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-04-04 00:02:51.091920 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-04-04 00:03:01.089710 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-04-04 00:03:01.089818 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-04-04 00:03:01.089832 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-04-04 00:03:01.089854 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-04-04 00:03:01.093001 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-04-04 00:03:01.093032 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-04-04 00:03:11.098940 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-04-04 00:03:11.099077 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-04-04 00:03:11.099125 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-04-04 00:03:11.099140 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-04-04 00:03:11.099152 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-04-04 00:03:11.099163 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-04-04 00:03:11.805307 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=ac61db4c-4dab-4704-a293-4538398819b4] 2026-04-04 00:03:11.871565 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=c7985393-8277-4e4c-86b4-9007e0d68aa6] 2026-04-04 00:03:11.949645 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=6dc59444-7077-45a6-aa48-788db8c0a89d] 2026-04-04 00:03:12.046249 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=a03d3daf-9e7b-4b77-9725-152c261b98ab] 2026-04-04 00:03:21.107652 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-04-04 00:03:21.107736 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-04-04 00:03:22.189933 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=8d29d403-55e0-4d98-ac36-f7681822d5cc] 2026-04-04 00:03:22.192456 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=d54028dc-9ac6-45e8-baff-d9a77f596fb8] 2026-04-04 00:03:22.214660 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-04-04 00:03:22.216133 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-04-04 00:03:22.219958 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-04-04 00:03:22.222861 | orchestrator | null_resource.node_semaphore: Creating... 2026-04-04 00:03:22.224360 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 
2026-04-04 00:03:22.226366 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4851495494568505936] 2026-04-04 00:03:22.228203 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-04-04 00:03:22.230284 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-04-04 00:03:22.232560 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-04-04 00:03:22.246371 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-04-04 00:03:22.246689 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-04-04 00:03:22.255653 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-04-04 00:03:25.632043 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=a03d3daf-9e7b-4b77-9725-152c261b98ab/e5c55c1d-a7d7-4703-805a-3622b0d8a5d5] 2026-04-04 00:03:25.649936 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=ac61db4c-4dab-4704-a293-4538398819b4/5b6ff0f2-3c26-4156-872a-5361d1bd2bb9] 2026-04-04 00:03:25.667166 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=8d29d403-55e0-4d98-ac36-f7681822d5cc/06ea839a-b266-4e51-93b3-b1dda83a55b8] 2026-04-04 00:03:25.679215 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=ac61db4c-4dab-4704-a293-4538398819b4/4d96aee6-67ba-49f8-bc7c-2d85a42af737] 2026-04-04 00:03:31.752263 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=a03d3daf-9e7b-4b77-9725-152c261b98ab/19f8077a-5fb2-4798-9d2e-069ef293e905] 2026-04-04 00:03:31.792636 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation 
complete after 10s [id=8d29d403-55e0-4d98-ac36-f7681822d5cc/86e206f3-2d5a-4624-95fc-aec866356159] 2026-04-04 00:03:31.897099 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=ac61db4c-4dab-4704-a293-4538398819b4/aa04dcb3-9f04-4660-8785-ade3b95c2bd8] 2026-04-04 00:03:31.915306 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=8d29d403-55e0-4d98-ac36-f7681822d5cc/aea0a796-d357-4fa7-8d72-1f8005c02d55] 2026-04-04 00:03:31.931190 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=a03d3daf-9e7b-4b77-9725-152c261b98ab/b430c263-2f81-418d-8192-e181c70d45ae] 2026-04-04 00:03:32.251045 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-04-04 00:03:42.251425 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-04-04 00:03:42.687748 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=f8a76c2b-0f22-4d01-8fa2-ee7c026fe25e] 2026-04-04 00:03:42.706180 | orchestrator | 2026-04-04 00:03:42.706278 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
2026-04-04 00:03:42.706286 | orchestrator | 2026-04-04 00:03:42.706291 | orchestrator | Outputs: 2026-04-04 00:03:42.706296 | orchestrator | 2026-04-04 00:03:42.706300 | orchestrator | manager_address = 2026-04-04 00:03:42.706316 | orchestrator | private_key = 2026-04-04 00:03:43.046146 | orchestrator | ok: Runtime: 0:01:23.087739 2026-04-04 00:03:43.084287 | 2026-04-04 00:03:43.084567 | TASK [Create infrastructure (stable)] 2026-04-04 00:03:43.622785 | orchestrator | skipping: Conditional result was False 2026-04-04 00:03:43.644724 | 2026-04-04 00:03:43.644917 | TASK [Fetch manager address] 2026-04-04 00:03:44.129353 | orchestrator | ok 2026-04-04 00:03:44.138669 | 2026-04-04 00:03:44.138799 | TASK [Set manager_host address] 2026-04-04 00:03:44.234183 | orchestrator | ok 2026-04-04 00:03:44.244984 | 2026-04-04 00:03:44.245135 | LOOP [Update ansible collections] 2026-04-04 00:03:45.309458 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-04 00:03:45.309825 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 00:03:45.309886 | orchestrator | Starting galaxy collection install process 2026-04-04 00:03:45.309925 | orchestrator | Process install dependency map 2026-04-04 00:03:45.310017 | orchestrator | Starting collection install process 2026-04-04 00:03:45.310052 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2026-04-04 00:03:45.310092 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2026-04-04 00:03:45.310143 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-04-04 00:03:45.310219 | orchestrator | ok: Item: commons Runtime: 0:00:00.729460 2026-04-04 00:03:46.233730 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 
00:03:46.233861 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-04-04 00:03:46.233892 | orchestrator | Starting galaxy collection install process 2026-04-04 00:03:46.233915 | orchestrator | Process install dependency map 2026-04-04 00:03:46.233957 | orchestrator | Starting collection install process 2026-04-04 00:03:46.233979 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2026-04-04 00:03:46.233999 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2026-04-04 00:03:46.234020 | orchestrator | osism.services:999.0.0 was installed successfully 2026-04-04 00:03:46.234057 | orchestrator | ok: Item: services Runtime: 0:00:00.649796 2026-04-04 00:03:46.253410 | 2026-04-04 00:03:46.253560 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-04 00:03:56.866572 | orchestrator | ok 2026-04-04 00:03:56.881693 | 2026-04-04 00:03:56.881874 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-04 00:04:56.925858 | orchestrator | ok 2026-04-04 00:04:56.935593 | 2026-04-04 00:04:56.935702 | TASK [Fetch manager ssh hostkey] 2026-04-04 00:04:59.011040 | orchestrator | Output suppressed because no_log was given 2026-04-04 00:04:59.020879 | 2026-04-04 00:04:59.021097 | TASK [Get ssh keypair from terraform environment] 2026-04-04 00:04:59.554353 | orchestrator | ok: Runtime: 0:00:00.006986 2026-04-04 00:04:59.569482 | 2026-04-04 00:04:59.569649 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-04 00:04:59.612115 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-04-04 00:04:59.625880 | 2026-04-04 00:04:59.626133 | TASK [Run manager part 0] 2026-04-04 00:05:00.513472 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 00:05:00.566899 | orchestrator | 2026-04-04 00:05:00.566939 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-04-04 00:05:00.566945 | orchestrator | 2026-04-04 00:05:00.566958 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-04-04 00:05:04.790154 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:04.790246 | orchestrator | 2026-04-04 00:05:04.790280 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-04 00:05:04.790291 | orchestrator | 2026-04-04 00:05:04.790300 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:05:06.809935 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:06.810041 | orchestrator | 2026-04-04 00:05:06.810057 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-04 00:05:07.499659 | orchestrator | ok: [testbed-manager] 2026-04-04 00:05:07.499745 | orchestrator | 2026-04-04 00:05:07.499760 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-04 00:05:07.545072 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:07.545155 | orchestrator | 2026-04-04 00:05:07.545168 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-04-04 00:05:07.579486 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:07.579566 | orchestrator | 2026-04-04 00:05:07.579577 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-04-04 00:05:07.616390 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:05:07.616462 | 
orchestrator | 2026-04-04 00:05:07.616474 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-04-04 00:05:08.329437 | orchestrator | changed: [testbed-manager] 2026-04-04 00:05:08.329486 | orchestrator | 2026-04-04 00:05:08.329493 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-04-04 00:08:18.017247 | orchestrator | changed: [testbed-manager] 2026-04-04 00:08:18.017361 | orchestrator | 2026-04-04 00:08:18.017382 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-04 00:09:41.487822 | orchestrator | changed: [testbed-manager] 2026-04-04 00:09:41.488191 | orchestrator | 2026-04-04 00:09:41.488226 | orchestrator | TASK [Install required packages] *********************************************** 2026-04-04 00:10:05.752701 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:05.752806 | orchestrator | 2026-04-04 00:10:05.752825 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-04-04 00:10:14.332468 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:14.332576 | orchestrator | 2026-04-04 00:10:14.332584 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-04 00:10:14.386653 | orchestrator | ok: [testbed-manager] 2026-04-04 00:10:14.386720 | orchestrator | 2026-04-04 00:10:14.386736 | orchestrator | TASK [Get current user] ******************************************************** 2026-04-04 00:10:15.192638 | orchestrator | ok: [testbed-manager] 2026-04-04 00:10:15.192697 | orchestrator | 2026-04-04 00:10:15.192707 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-04 00:10:15.969132 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:15.969180 | orchestrator | 2026-04-04 00:10:15.969189 | orchestrator | TASK [Install netaddr in venv] 
************************************************* 2026-04-04 00:10:22.232172 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:22.232276 | orchestrator | 2026-04-04 00:10:22.232294 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-04 00:10:28.220388 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:28.220434 | orchestrator | 2026-04-04 00:10:28.220443 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-04 00:10:30.916343 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:30.916414 | orchestrator | 2026-04-04 00:10:30.916425 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-04 00:10:32.622166 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:32.622279 | orchestrator | 2026-04-04 00:10:32.622298 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-04 00:10:33.751854 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-04 00:10:33.751920 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-04 00:10:33.751930 | orchestrator | 2026-04-04 00:10:33.751942 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-04 00:10:33.798419 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-04 00:10:33.798508 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-04 00:10:33.798525 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-04 00:10:33.798541 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-04 00:10:37.234202 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-04 00:10:37.234291 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-04 00:10:37.234300 | orchestrator | 2026-04-04 00:10:37.234307 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-04 00:10:37.840449 | orchestrator | changed: [testbed-manager] 2026-04-04 00:10:37.840491 | orchestrator | 2026-04-04 00:10:37.840498 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-04 00:11:01.361168 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-04 00:11:01.361257 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-04 00:11:01.361273 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-04 00:11:01.361283 | orchestrator | 2026-04-04 00:11:01.361295 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-04 00:11:03.697274 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-04 00:11:03.697370 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-04 00:11:03.697386 | orchestrator | 2026-04-04 00:11:03.697402 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-04 00:11:03.697416 | orchestrator | 2026-04-04 00:11:03.697428 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:11:05.104799 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:05.104848 | orchestrator | 2026-04-04 00:11:05.104857 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-04 00:11:05.158716 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:05.158802 | 
orchestrator | 2026-04-04 00:11:05.158820 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-04 00:11:05.233241 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:05.233298 | orchestrator | 2026-04-04 00:11:05.233304 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-04 00:11:06.019313 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:06.019410 | orchestrator | 2026-04-04 00:11:06.019430 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-04 00:11:06.737286 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:06.737384 | orchestrator | 2026-04-04 00:11:06.737408 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-04 00:11:08.083658 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-04 00:11:08.083703 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-04 00:11:08.083711 | orchestrator | 2026-04-04 00:11:08.083720 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-04 00:11:09.454137 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:09.454246 | orchestrator | 2026-04-04 00:11:09.454261 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-04 00:11:11.188023 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-04 00:11:11.188067 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-04 00:11:11.188083 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-04 00:11:11.188089 | orchestrator | 2026-04-04 00:11:11.188097 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-04 00:11:11.229596 | orchestrator | skipping: 
[testbed-manager] 2026-04-04 00:11:11.229746 | orchestrator | 2026-04-04 00:11:11.229756 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-04 00:11:11.308172 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:11.308279 | orchestrator | 2026-04-04 00:11:11.308295 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-04 00:11:11.882122 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:11.882236 | orchestrator | 2026-04-04 00:11:11.882254 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-04 00:11:11.946212 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:11.946303 | orchestrator | 2026-04-04 00:11:11.946322 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-04 00:11:12.815889 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:11:12.816039 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:12.816058 | orchestrator | 2026-04-04 00:11:12.816071 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-04 00:11:12.850986 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:12.851051 | orchestrator | 2026-04-04 00:11:12.851061 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-04 00:11:12.883225 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:12.883286 | orchestrator | 2026-04-04 00:11:12.883299 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-04 00:11:12.920078 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:12.920168 | orchestrator | 2026-04-04 00:11:12.920213 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-04 00:11:12.996126 | 
orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:12.996250 | orchestrator | 2026-04-04 00:11:12.996270 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-04 00:11:13.715873 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:13.715967 | orchestrator | 2026-04-04 00:11:13.715986 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-04 00:11:13.715999 | orchestrator | 2026-04-04 00:11:13.716015 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:11:15.122146 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:15.122262 | orchestrator | 2026-04-04 00:11:15.122279 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-04 00:11:16.094308 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:16.094381 | orchestrator | 2026-04-04 00:11:16.094389 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:11:16.094395 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-04 00:11:16.094399 | orchestrator | 2026-04-04 00:11:16.382539 | orchestrator | ok: Runtime: 0:06:16.292729 2026-04-04 00:11:16.402365 | 2026-04-04 00:11:16.402527 | TASK [Point out that logging in on the manager is now possible] 2026-04-04 00:11:16.440351 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-04-04 00:11:16.452466 | 2026-04-04 00:11:16.452643 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-04 00:11:16.493379 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 
2026-04-04 00:11:16.503561 | 2026-04-04 00:11:16.503737 | TASK [Run manager part 1 + 2] 2026-04-04 00:11:17.340262 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-04 00:11:17.420416 | orchestrator | 2026-04-04 00:11:17.420500 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-04 00:11:17.420522 | orchestrator | 2026-04-04 00:11:17.420554 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:11:19.953226 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:19.953308 | orchestrator | 2026-04-04 00:11:19.953330 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-04 00:11:19.989943 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:19.989992 | orchestrator | 2026-04-04 00:11:19.990002 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-04 00:11:20.039925 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:20.039965 | orchestrator | 2026-04-04 00:11:20.039972 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-04 00:11:20.081667 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:20.081724 | orchestrator | 2026-04-04 00:11:20.081734 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-04 00:11:20.144207 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:20.144253 | orchestrator | 2026-04-04 00:11:20.144261 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-04 00:11:20.209831 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:20.209883 | orchestrator | 2026-04-04 00:11:20.209892 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-04 00:11:20.263251 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-04 00:11:20.263297 | orchestrator | 2026-04-04 00:11:20.263303 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-04 00:11:21.017050 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:21.017103 | orchestrator | 2026-04-04 00:11:21.017113 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-04 00:11:21.066549 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:21.066594 | orchestrator | 2026-04-04 00:11:21.066599 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-04 00:11:22.448968 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:22.449027 | orchestrator | 2026-04-04 00:11:22.449036 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-04 00:11:23.035908 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:23.035996 | orchestrator | 2026-04-04 00:11:23.036010 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-04 00:11:24.175206 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:24.175278 | orchestrator | 2026-04-04 00:11:24.175297 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-04 00:11:39.247993 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:39.248034 | orchestrator | 2026-04-04 00:11:39.248040 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-04 00:11:39.913735 | orchestrator | ok: [testbed-manager] 2026-04-04 00:11:39.913808 | orchestrator | 2026-04-04 00:11:39.913819 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-04-04 00:11:39.973574 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:39.973613 | orchestrator | 2026-04-04 00:11:39.973620 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-04 00:11:40.947732 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:40.947772 | orchestrator | 2026-04-04 00:11:40.947779 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-04 00:11:41.877609 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:41.877645 | orchestrator | 2026-04-04 00:11:41.877652 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-04 00:11:42.481474 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:42.481529 | orchestrator | 2026-04-04 00:11:42.481543 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-04 00:11:42.523293 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-04 00:11:42.523373 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-04 00:11:42.523384 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-04 00:11:42.523391 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-04 00:11:45.097963 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:45.098004 | orchestrator | 2026-04-04 00:11:45.098010 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-04 00:11:53.584294 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-04 00:11:53.584507 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-04 00:11:53.584546 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-04 00:11:53.584575 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-04 00:11:53.584602 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-04 00:11:53.584620 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-04 00:11:53.584639 | orchestrator | 2026-04-04 00:11:53.584658 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-04 00:11:54.546943 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:54.547088 | orchestrator | 2026-04-04 00:11:54.547104 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-04 00:11:57.397614 | orchestrator | changed: [testbed-manager] 2026-04-04 00:11:57.397704 | orchestrator | 2026-04-04 00:11:57.397722 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-04 00:11:57.434945 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:11:57.435033 | orchestrator | 2026-04-04 00:11:57.435049 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-04 00:13:29.132813 | orchestrator | changed: [testbed-manager] 2026-04-04 00:13:29.132916 | orchestrator | 2026-04-04 00:13:29.132933 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-04 00:13:30.244021 | orchestrator | ok: [testbed-manager] 2026-04-04 00:13:30.244060 | 
orchestrator | 2026-04-04 00:13:30.244068 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:13:30.244075 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-04 00:13:30.244080 | orchestrator | 2026-04-04 00:13:30.613221 | orchestrator | ok: Runtime: 0:02:13.539565 2026-04-04 00:13:30.631322 | 2026-04-04 00:13:30.631491 | TASK [Reboot manager] 2026-04-04 00:13:32.172282 | orchestrator | ok: Runtime: 0:00:00.947826 2026-04-04 00:13:32.190963 | 2026-04-04 00:13:32.191127 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-04 00:13:46.726607 | orchestrator | ok 2026-04-04 00:13:46.737713 | 2026-04-04 00:13:46.737841 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-04 00:14:46.788160 | orchestrator | ok 2026-04-04 00:14:46.798485 | 2026-04-04 00:14:46.798613 | TASK [Deploy manager + bootstrap nodes] 2026-04-04 00:14:49.272021 | orchestrator | 2026-04-04 00:14:49.272223 | orchestrator | # DEPLOY MANAGER 2026-04-04 00:14:49.272247 | orchestrator | 2026-04-04 00:14:49.272260 | orchestrator | + set -e 2026-04-04 00:14:49.272272 | orchestrator | + echo 2026-04-04 00:14:49.272286 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-04 00:14:49.272302 | orchestrator | + echo 2026-04-04 00:14:49.272344 | orchestrator | + cat /opt/manager-vars.sh 2026-04-04 00:14:49.275229 | orchestrator | export NUMBER_OF_NODES=6 2026-04-04 00:14:49.275264 | orchestrator | 2026-04-04 00:14:49.275276 | orchestrator | export CEPH_VERSION=reef 2026-04-04 00:14:49.275288 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-04 00:14:49.275299 | orchestrator | export MANAGER_VERSION=latest 2026-04-04 00:14:49.275320 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-04 00:14:49.275330 | orchestrator | 2026-04-04 00:14:49.275347 | orchestrator | export ARA=false 2026-04-04 00:14:49.275357 | 
orchestrator | export DEPLOY_MODE=manager 2026-04-04 00:14:49.275373 | orchestrator | export TEMPEST=true 2026-04-04 00:14:49.275384 | orchestrator | export IS_ZUUL=true 2026-04-04 00:14:49.275420 | orchestrator | 2026-04-04 00:14:49.275438 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:14:49.275449 | orchestrator | export EXTERNAL_API=false 2026-04-04 00:14:49.275459 | orchestrator | 2026-04-04 00:14:49.275469 | orchestrator | export IMAGE_USER=ubuntu 2026-04-04 00:14:49.275486 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-04 00:14:49.275502 | orchestrator | 2026-04-04 00:14:49.275517 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-04 00:14:49.275541 | orchestrator | 2026-04-04 00:14:49.275558 | orchestrator | + echo 2026-04-04 00:14:49.275576 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 00:14:49.276483 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 00:14:49.276571 | orchestrator | ++ INTERACTIVE=false 2026-04-04 00:14:49.276592 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 00:14:49.276701 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 00:14:49.276734 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 00:14:49.276750 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 00:14:49.276767 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 00:14:49.276782 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 00:14:49.276799 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 00:14:49.276815 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 00:14:49.276834 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 00:14:49.276845 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 00:14:49.276854 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 00:14:49.276864 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 00:14:49.276884 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 00:14:49.276894 | orchestrator | ++ export 
ARA=false 2026-04-04 00:14:49.276904 | orchestrator | ++ ARA=false 2026-04-04 00:14:49.276914 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 00:14:49.276923 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 00:14:49.276933 | orchestrator | ++ export TEMPEST=true 2026-04-04 00:14:49.276942 | orchestrator | ++ TEMPEST=true 2026-04-04 00:14:49.276952 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 00:14:49.276961 | orchestrator | ++ IS_ZUUL=true 2026-04-04 00:14:49.276970 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:14:49.276980 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:14:49.276989 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 00:14:49.276999 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 00:14:49.277008 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 00:14:49.277018 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 00:14:49.277028 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 00:14:49.277037 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 00:14:49.277047 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 00:14:49.277057 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 00:14:49.277067 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-04 00:14:49.328424 | orchestrator | + docker version 2026-04-04 00:14:49.435216 | orchestrator | Client: Docker Engine - Community 2026-04-04 00:14:49.435325 | orchestrator | Version: 27.5.1 2026-04-04 00:14:49.435340 | orchestrator | API version: 1.47 2026-04-04 00:14:49.435403 | orchestrator | Go version: go1.22.11 2026-04-04 00:14:49.435418 | orchestrator | Git commit: 9f9e405 2026-04-04 00:14:49.435430 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-04 00:14:49.435442 | orchestrator | OS/Arch: linux/amd64 2026-04-04 00:14:49.435453 | orchestrator | Context: default 2026-04-04 00:14:49.435464 | orchestrator | 2026-04-04 00:14:49.435476 | 
orchestrator | Server: Docker Engine - Community 2026-04-04 00:14:49.435488 | orchestrator | Engine: 2026-04-04 00:14:49.435512 | orchestrator | Version: 27.5.1 2026-04-04 00:14:49.435525 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-04 00:14:49.435563 | orchestrator | Go version: go1.22.11 2026-04-04 00:14:49.435575 | orchestrator | Git commit: 4c9b3b0 2026-04-04 00:14:49.435586 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-04 00:14:49.435597 | orchestrator | OS/Arch: linux/amd64 2026-04-04 00:14:49.435608 | orchestrator | Experimental: false 2026-04-04 00:14:49.435619 | orchestrator | containerd: 2026-04-04 00:14:49.435630 | orchestrator | Version: v2.2.2 2026-04-04 00:14:49.435641 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-04-04 00:14:49.435652 | orchestrator | runc: 2026-04-04 00:14:49.435663 | orchestrator | Version: 1.3.4 2026-04-04 00:14:49.435674 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-04-04 00:14:49.435685 | orchestrator | docker-init: 2026-04-04 00:14:49.435696 | orchestrator | Version: 0.19.0 2026-04-04 00:14:49.435708 | orchestrator | GitCommit: de40ad0 2026-04-04 00:14:49.438911 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-04 00:14:49.447844 | orchestrator | + set -e 2026-04-04 00:14:49.447933 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 00:14:49.447949 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 00:14:49.447963 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 00:14:49.447975 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 00:14:49.447986 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 00:14:49.447997 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 00:14:49.448045 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 00:14:49.448059 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 00:14:49.448070 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 00:14:49.448091 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 00:14:49.448102 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 00:14:49.448113 | orchestrator | ++ export ARA=false 2026-04-04 00:14:49.448124 | orchestrator | ++ ARA=false 2026-04-04 00:14:49.448183 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 00:14:49.448221 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 00:14:49.448236 | orchestrator | ++ export TEMPEST=true 2026-04-04 00:14:49.448246 | orchestrator | ++ TEMPEST=true 2026-04-04 00:14:49.448257 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 00:14:49.448268 | orchestrator | ++ IS_ZUUL=true 2026-04-04 00:14:49.448279 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:14:49.448290 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:14:49.448301 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 00:14:49.448312 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 00:14:49.448323 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 00:14:49.448334 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 00:14:49.448350 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 00:14:49.448361 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 00:14:49.448373 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 00:14:49.448384 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 00:14:49.448395 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 00:14:49.448406 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 00:14:49.448417 | orchestrator | ++ INTERACTIVE=false 2026-04-04 00:14:49.448427 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 00:14:49.448442 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 00:14:49.448747 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 00:14:49.448766 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:14:49.448777 | orchestrator | + 
/opt/configuration/scripts/set-ceph-version.sh reef 2026-04-04 00:14:49.456444 | orchestrator | + set -e 2026-04-04 00:14:49.457050 | orchestrator | + VERSION=reef 2026-04-04 00:14:49.457364 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-04 00:14:49.463684 | orchestrator | + [[ -n ceph_version: reef ]] 2026-04-04 00:14:49.463711 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-04-04 00:14:49.468563 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-04-04 00:14:49.475289 | orchestrator | + set -e 2026-04-04 00:14:49.475348 | orchestrator | + VERSION=2024.2 2026-04-04 00:14:49.476461 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-04-04 00:14:49.479856 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-04-04 00:14:49.479897 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-04-04 00:14:49.485274 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-04 00:14:49.486246 | orchestrator | ++ semver latest 7.0.0 2026-04-04 00:14:49.544521 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:14:49.544639 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:14:49.544667 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-04 00:14:49.545182 | orchestrator | ++ semver latest 10.0.0-0 2026-04-04 00:14:49.600086 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:14:49.600523 | orchestrator | ++ semver 2024.2 2025.1 2026-04-04 00:14:49.656725 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:14:49.656840 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-04 00:14:49.749568 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-04 00:14:49.750886 | orchestrator | + source /opt/venv/bin/activate 
2026-04-04 00:14:49.751889 | orchestrator | ++ deactivate nondestructive 2026-04-04 00:14:49.751932 | orchestrator | ++ '[' -n '' ']' 2026-04-04 00:14:49.751944 | orchestrator | ++ '[' -n '' ']' 2026-04-04 00:14:49.751991 | orchestrator | ++ hash -r 2026-04-04 00:14:49.752089 | orchestrator | ++ '[' -n '' ']' 2026-04-04 00:14:49.752103 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-04 00:14:49.752112 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-04 00:14:49.752156 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-04 00:14:49.752266 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-04 00:14:49.752287 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-04 00:14:49.752297 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-04 00:14:49.752483 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-04 00:14:49.752497 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-04 00:14:49.752521 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-04 00:14:49.752531 | orchestrator | ++ export PATH 2026-04-04 00:14:49.752721 | orchestrator | ++ '[' -n '' ']' 2026-04-04 00:14:49.752755 | orchestrator | ++ '[' -z '' ']' 2026-04-04 00:14:49.752764 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-04 00:14:49.752773 | orchestrator | ++ PS1='(venv) ' 2026-04-04 00:14:49.752837 | orchestrator | ++ export PS1 2026-04-04 00:14:49.752903 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-04 00:14:49.752915 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-04 00:14:49.752925 | orchestrator | ++ hash -r 2026-04-04 00:14:49.752993 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-04 00:14:50.869835 | orchestrator | 2026-04-04 00:14:50.869934 | 
orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-04 00:14:50.869953 | orchestrator | 2026-04-04 00:14:50.869967 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-04 00:14:51.430891 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:51.430992 | orchestrator | 2026-04-04 00:14:51.431009 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-04-04 00:14:52.413064 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:52.413189 | orchestrator | 2026-04-04 00:14:52.413207 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-04 00:14:52.413220 | orchestrator | 2026-04-04 00:14:52.413232 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:14:54.797790 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:54.797897 | orchestrator | 2026-04-04 00:14:54.797914 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-04 00:14:54.851332 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:54.851435 | orchestrator | 2026-04-04 00:14:54.851455 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-04 00:14:55.324496 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:55.324597 | orchestrator | 2026-04-04 00:14:55.324613 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-04 00:14:55.364594 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:14:55.364685 | orchestrator | 2026-04-04 00:14:55.364700 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-04 00:14:55.711511 | orchestrator | changed: [testbed-manager] 2026-04-04 00:14:55.711585 | orchestrator | 2026-04-04 
00:14:55.711597 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-04 00:14:56.040037 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:56.040298 | orchestrator | 2026-04-04 00:14:56.040321 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-04 00:14:56.140297 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:14:56.140386 | orchestrator | 2026-04-04 00:14:56.140402 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-04-04 00:14:56.140442 | orchestrator | 2026-04-04 00:14:56.140455 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:14:57.879451 | orchestrator | ok: [testbed-manager] 2026-04-04 00:14:57.879550 | orchestrator | 2026-04-04 00:14:57.879567 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-04 00:14:57.976200 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-04 00:14:57.976288 | orchestrator | 2026-04-04 00:14:57.976304 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-04 00:14:58.031271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-04 00:14:58.031345 | orchestrator | 2026-04-04 00:14:58.031354 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-04 00:14:59.129652 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-04 00:14:59.129767 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-04 00:14:59.129791 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-04 00:14:59.129809 | orchestrator | 2026-04-04 00:14:59.129828 | orchestrator | 
TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-04 00:15:00.916369 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-04 00:15:00.916473 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-04 00:15:00.916489 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-04 00:15:00.916502 | orchestrator | 2026-04-04 00:15:00.916516 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-04-04 00:15:01.557397 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:15:01.557497 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:01.557515 | orchestrator | 2026-04-04 00:15:01.557528 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-04 00:15:02.185056 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:15:02.185203 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:02.185232 | orchestrator | 2026-04-04 00:15:02.185254 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-04 00:15:02.240988 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:02.241067 | orchestrator | 2026-04-04 00:15:02.241079 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-04 00:15:02.599642 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:02.599760 | orchestrator | 2026-04-04 00:15:02.599778 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-04 00:15:02.670230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-04 00:15:02.670316 | orchestrator | 2026-04-04 00:15:02.670331 | orchestrator | TASK [osism.services.traefik : Create traefik external network] 
**************** 2026-04-04 00:15:03.795088 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:03.795200 | orchestrator | 2026-04-04 00:15:03.795216 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-04 00:15:04.594232 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:04.594340 | orchestrator | 2026-04-04 00:15:04.595225 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-04 00:15:15.441683 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:15.441797 | orchestrator | 2026-04-04 00:15:15.441836 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-04 00:15:15.489543 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:15.489630 | orchestrator | 2026-04-04 00:15:15.489644 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-04 00:15:15.489656 | orchestrator | 2026-04-04 00:15:15.489666 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:15:17.313959 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:17.314061 | orchestrator | 2026-04-04 00:15:17.314090 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-04 00:15:17.416533 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-04 00:15:17.416607 | orchestrator | 2026-04-04 00:15:17.416616 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-04 00:15:17.469524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:15:17.469607 | orchestrator | 2026-04-04 00:15:17.469620 | orchestrator | TASK [osism.services.manager : Install required packages] 
********************** 2026-04-04 00:15:19.794567 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:19.794666 | orchestrator | 2026-04-04 00:15:19.794684 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-04 00:15:19.847504 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:19.847603 | orchestrator | 2026-04-04 00:15:19.847619 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-04 00:15:19.972475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-04 00:15:19.972563 | orchestrator | 2026-04-04 00:15:19.972580 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-04 00:15:22.784964 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-04 00:15:22.785071 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-04 00:15:22.785086 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-04 00:15:22.785099 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-04 00:15:22.785110 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-04 00:15:22.785121 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-04 00:15:22.785132 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-04 00:15:22.785143 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-04 00:15:22.785200 | orchestrator | 2026-04-04 00:15:22.785214 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-04 00:15:23.396420 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:23.396541 | orchestrator | 2026-04-04 00:15:23.396571 | orchestrator | TASK [osism.services.manager : Copy client environment 
file] ******************* 2026-04-04 00:15:24.033819 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:24.033907 | orchestrator | 2026-04-04 00:15:24.033921 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-04 00:15:24.104798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-04 00:15:24.104889 | orchestrator | 2026-04-04 00:15:24.104906 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-04-04 00:15:25.306110 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-04 00:15:25.306310 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-04 00:15:25.306333 | orchestrator | 2026-04-04 00:15:25.306354 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-04 00:15:25.942279 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:25.942368 | orchestrator | 2026-04-04 00:15:25.942384 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-04 00:15:26.000005 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:26.000089 | orchestrator | 2026-04-04 00:15:26.000108 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-04 00:15:26.073546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-04 00:15:26.073617 | orchestrator | 2026-04-04 00:15:26.073627 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-04 00:15:26.691228 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:26.691319 | orchestrator | 2026-04-04 00:15:26.691337 | orchestrator | TASK [osism.services.manager : Include ansible config 
tasks] ******************* 2026-04-04 00:15:26.759656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-04 00:15:26.759768 | orchestrator | 2026-04-04 00:15:26.759784 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-04 00:15:28.155861 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:15:28.155984 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:15:28.156013 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:28.156036 | orchestrator | 2026-04-04 00:15:28.156059 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-04 00:15:28.775611 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:28.775696 | orchestrator | 2026-04-04 00:15:28.775713 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-04 00:15:28.831600 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:28.831689 | orchestrator | 2026-04-04 00:15:28.831705 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-04 00:15:28.932212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-04 00:15:28.932302 | orchestrator | 2026-04-04 00:15:28.932318 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-04 00:15:29.451229 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:29.451300 | orchestrator | 2026-04-04 00:15:29.451322 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-04 00:15:29.843952 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:29.844048 | orchestrator | 2026-04-04 00:15:29.844065 | 
orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-04 00:15:31.073010 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-04 00:15:31.073131 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-04 00:15:31.073148 | orchestrator | 2026-04-04 00:15:31.073209 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-04 00:15:31.719617 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:31.719726 | orchestrator | 2026-04-04 00:15:31.719744 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-04 00:15:32.098286 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:32.098371 | orchestrator | 2026-04-04 00:15:32.098385 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-04 00:15:32.443465 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:32.443554 | orchestrator | 2026-04-04 00:15:32.443572 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-04 00:15:32.496245 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:32.496339 | orchestrator | 2026-04-04 00:15:32.496356 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-04 00:15:32.570780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-04 00:15:32.570869 | orchestrator | 2026-04-04 00:15:32.570884 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-04 00:15:32.614369 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:32.614449 | orchestrator | 2026-04-04 00:15:32.614463 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-04 
00:15:34.592716 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-04 00:15:34.592821 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-04 00:15:34.592837 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-04 00:15:34.592849 | orchestrator | 2026-04-04 00:15:34.592862 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-04 00:15:35.275697 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:35.275794 | orchestrator | 2026-04-04 00:15:35.275812 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-04 00:15:35.971967 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:35.972074 | orchestrator | 2026-04-04 00:15:35.972099 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-04 00:15:36.666720 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:36.666804 | orchestrator | 2026-04-04 00:15:36.666818 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-04 00:15:36.729137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-04 00:15:36.729269 | orchestrator | 2026-04-04 00:15:36.729286 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-04 00:15:36.767662 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:36.767756 | orchestrator | 2026-04-04 00:15:36.767772 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-04 00:15:37.463571 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-04 00:15:37.463667 | orchestrator | 2026-04-04 00:15:37.463683 | orchestrator | TASK [osism.services.manager : Include service tasks] 
************************** 2026-04-04 00:15:37.541984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-04 00:15:37.542119 | orchestrator | 2026-04-04 00:15:37.542135 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-04 00:15:38.235358 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:38.235460 | orchestrator | 2026-04-04 00:15:38.235477 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-04 00:15:38.853975 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:38.854124 | orchestrator | 2026-04-04 00:15:38.854143 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-04 00:15:38.911558 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:15:38.911647 | orchestrator | 2026-04-04 00:15:38.911663 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-04 00:15:38.958282 | orchestrator | ok: [testbed-manager] 2026-04-04 00:15:38.958375 | orchestrator | 2026-04-04 00:15:38.958391 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-04 00:15:39.824384 | orchestrator | changed: [testbed-manager] 2026-04-04 00:15:39.824479 | orchestrator | 2026-04-04 00:15:39.824495 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-04 00:16:50.442975 | orchestrator | changed: [testbed-manager] 2026-04-04 00:16:50.443184 | orchestrator | 2026-04-04 00:16:50.443206 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-04 00:16:51.433300 | orchestrator | ok: [testbed-manager] 2026-04-04 00:16:51.433390 | orchestrator | 2026-04-04 00:16:51.433403 | orchestrator | TASK [osism.services.manager : Do a 
manual start of the manager service] ******* 2026-04-04 00:16:51.491274 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:16:51.491378 | orchestrator | 2026-04-04 00:16:51.491395 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-04 00:16:53.847955 | orchestrator | changed: [testbed-manager] 2026-04-04 00:16:53.848059 | orchestrator | 2026-04-04 00:16:53.848076 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-04 00:16:53.955192 | orchestrator | ok: [testbed-manager] 2026-04-04 00:16:53.955327 | orchestrator | 2026-04-04 00:16:53.955366 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-04 00:16:53.955380 | orchestrator | 2026-04-04 00:16:53.955392 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-04 00:16:53.998542 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:16:53.998620 | orchestrator | 2026-04-04 00:16:53.998630 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-04 00:17:54.051165 | orchestrator | Pausing for 60 seconds 2026-04-04 00:17:54.051410 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:54.051446 | orchestrator | 2026-04-04 00:17:54.051468 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-04 00:17:57.023491 | orchestrator | changed: [testbed-manager] 2026-04-04 00:17:57.023568 | orchestrator | 2026-04-04 00:17:57.023580 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-04 00:18:59.000324 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-04 00:18:59.000434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-04 00:18:59.000450 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-04 00:18:59.000487 | orchestrator | changed: [testbed-manager] 2026-04-04 00:18:59.000500 | orchestrator | 2026-04-04 00:18:59.000511 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-04 00:19:05.042750 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:05.042846 | orchestrator | 2026-04-04 00:19:05.042858 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-04 00:19:05.121019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-04 00:19:05.121137 | orchestrator | 2026-04-04 00:19:05.121154 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-04 00:19:05.121168 | orchestrator | 2026-04-04 00:19:05.121180 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-04 00:19:05.172939 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:05.173037 | orchestrator | 2026-04-04 00:19:05.173053 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-04 00:19:05.241694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-04 00:19:05.241789 | orchestrator | 2026-04-04 00:19:05.241805 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-04 00:19:06.073384 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:06.073522 | orchestrator | 2026-04-04 00:19:06.073541 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-04 00:19:09.302867 | 
orchestrator | ok: [testbed-manager] 2026-04-04 00:19:09.302990 | orchestrator | 2026-04-04 00:19:09.303006 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-04-04 00:19:09.363595 | orchestrator | ok: [testbed-manager] => { 2026-04-04 00:19:09.363703 | orchestrator | "version_check_result.stdout_lines": [ 2026-04-04 00:19:09.363739 | orchestrator | "=== OSISM Container Version Check ===", 2026-04-04 00:19:09.363754 | orchestrator | "Checking running containers against expected versions...", 2026-04-04 00:19:09.363767 | orchestrator | "", 2026-04-04 00:19:09.363780 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-04-04 00:19:09.363791 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-04 00:19:09.363802 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.363813 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-04-04 00:19:09.363825 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.363836 | orchestrator | "", 2026-04-04 00:19:09.363847 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-04-04 00:19:09.363872 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-04-04 00:19:09.363884 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.363895 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-04-04 00:19:09.363906 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.363917 | orchestrator | "", 2026-04-04 00:19:09.363928 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-04-04 00:19:09.363939 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-04 00:19:09.363950 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.363961 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-04-04 
00:19:09.363972 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.363983 | orchestrator | "", 2026-04-04 00:19:09.363994 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-04-04 00:19:09.364006 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-04 00:19:09.364017 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364028 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-04-04 00:19:09.364039 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364050 | orchestrator | "", 2026-04-04 00:19:09.364061 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-04-04 00:19:09.364098 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-04 00:19:09.364110 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364121 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-04-04 00:19:09.364135 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364147 | orchestrator | "", 2026-04-04 00:19:09.364161 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-04-04 00:19:09.364174 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364188 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364200 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364213 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364226 | orchestrator | "", 2026-04-04 00:19:09.364239 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-04-04 00:19:09.364252 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-04 00:19:09.364265 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364278 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-04-04 00:19:09.364314 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364327 | orchestrator | "", 2026-04-04 
00:19:09.364339 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-04-04 00:19:09.364352 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-04 00:19:09.364365 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364377 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-04-04 00:19:09.364399 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364413 | orchestrator | "", 2026-04-04 00:19:09.364426 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-04-04 00:19:09.364444 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-04-04 00:19:09.364457 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364471 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-04-04 00:19:09.364484 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364495 | orchestrator | "", 2026-04-04 00:19:09.364506 | orchestrator | "Checking service: redis (Redis Cache)", 2026-04-04 00:19:09.364517 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-04 00:19:09.364527 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364538 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-04-04 00:19:09.364549 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364560 | orchestrator | "", 2026-04-04 00:19:09.364571 | orchestrator | "Checking service: api (OSISM API Service)", 2026-04-04 00:19:09.364582 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364593 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364604 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364615 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364626 | orchestrator | "", 2026-04-04 00:19:09.364637 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 
2026-04-04 00:19:09.364648 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364659 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364670 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364681 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364691 | orchestrator | "", 2026-04-04 00:19:09.364702 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-04-04 00:19:09.364713 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364724 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364735 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364746 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364756 | orchestrator | "", 2026-04-04 00:19:09.364767 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-04-04 00:19:09.364778 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364789 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364807 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364818 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364829 | orchestrator | "", 2026-04-04 00:19:09.364840 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-04-04 00:19:09.364870 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364882 | orchestrator | " Enabled: true", 2026-04-04 00:19:09.364893 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-04-04 00:19:09.364903 | orchestrator | " Status: ✅ MATCH", 2026-04-04 00:19:09.364914 | orchestrator | "", 2026-04-04 00:19:09.364925 | orchestrator | "=== Summary ===", 2026-04-04 00:19:09.364935 | orchestrator | "Errors (version mismatches): 0", 2026-04-04 00:19:09.364946 | orchestrator | "Warnings (expected containers not running): 0", 
2026-04-04 00:19:09.364957 | orchestrator | "", 2026-04-04 00:19:09.364968 | orchestrator | "✅ All running containers match expected versions!" 2026-04-04 00:19:09.364979 | orchestrator | ] 2026-04-04 00:19:09.364990 | orchestrator | } 2026-04-04 00:19:09.365002 | orchestrator | 2026-04-04 00:19:09.365014 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-04-04 00:19:09.416130 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:09.416200 | orchestrator | 2026-04-04 00:19:09.416214 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:19:09.416226 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-04-04 00:19:09.416239 | orchestrator | 2026-04-04 00:19:09.513208 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-04 00:19:09.513340 | orchestrator | + deactivate 2026-04-04 00:19:09.513368 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-04 00:19:09.513390 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-04 00:19:09.513410 | orchestrator | + export PATH 2026-04-04 00:19:09.513429 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-04 00:19:09.513447 | orchestrator | + '[' -n '' ']' 2026-04-04 00:19:09.513464 | orchestrator | + hash -r 2026-04-04 00:19:09.513481 | orchestrator | + '[' -n '' ']' 2026-04-04 00:19:09.513500 | orchestrator | + unset VIRTUAL_ENV 2026-04-04 00:19:09.513519 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-04 00:19:09.513538 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-04 00:19:09.513556 | orchestrator | + unset -f deactivate 2026-04-04 00:19:09.513575 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-04-04 00:19:09.520238 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 00:19:09.520364 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-04 00:19:09.520382 | orchestrator | + local max_attempts=60 2026-04-04 00:19:09.520405 | orchestrator | + local name=ceph-ansible 2026-04-04 00:19:09.520432 | orchestrator | + local attempt_num=1 2026-04-04 00:19:09.521239 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:19:09.558575 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:19:09.558661 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-04 00:19:09.558675 | orchestrator | + local max_attempts=60 2026-04-04 00:19:09.558687 | orchestrator | + local name=kolla-ansible 2026-04-04 00:19:09.558699 | orchestrator | + local attempt_num=1 2026-04-04 00:19:09.559431 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-04 00:19:09.605760 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:19:09.605839 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-04 00:19:09.605853 | orchestrator | + local max_attempts=60 2026-04-04 00:19:09.605865 | orchestrator | + local name=osism-ansible 2026-04-04 00:19:09.605876 | orchestrator | + local attempt_num=1 2026-04-04 00:19:09.607110 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-04 00:19:09.646351 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:19:09.646442 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-04 00:19:09.646456 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-04 00:19:10.318626 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-04-04 00:19:10.494474 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-04-04 00:19:10.494603 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-04-04 00:19:10.494620 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-04-04 00:19:10.494631 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-04-04 00:19:10.494645 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-04-04 00:19:10.494656 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-04-04 00:19:10.494667 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-04-04 00:19:10.494678 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-04-04 00:19:10.494707 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-04-04 00:19:10.494718 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-04-04 00:19:10.494729 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-04-04 00:19:10.494740 | orchestrator | 
manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-04-04 00:19:10.494750 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-04-04 00:19:10.494761 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-04-04 00:19:10.494772 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-04-04 00:19:10.494783 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-04-04 00:19:10.501655 | orchestrator | ++ semver latest 7.0.0 2026-04-04 00:19:10.550587 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:19:10.550677 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:19:10.550692 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-04-04 00:19:10.555318 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-04-04 00:19:23.073221 | orchestrator | 2026-04-04 00:19:23 | INFO  | Prepare task for execution of resolvconf. 2026-04-04 00:19:23.294591 | orchestrator | 2026-04-04 00:19:23 | INFO  | Task b74e323e-07c0-4974-9541-bfbe933bbab5 (resolvconf) was prepared for execution. 2026-04-04 00:19:23.294747 | orchestrator | 2026-04-04 00:19:23 | INFO  | It takes a moment until task b74e323e-07c0-4974-9541-bfbe933bbab5 (resolvconf) has been started and output is visible here. 
2026-04-04 00:19:36.668983 | orchestrator | 2026-04-04 00:19:36.669118 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-04-04 00:19:36.669139 | orchestrator | 2026-04-04 00:19:36.669152 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:19:36.669164 | orchestrator | Saturday 04 April 2026 00:19:26 +0000 (0:00:00.172) 0:00:00.172 ******** 2026-04-04 00:19:36.669175 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:36.669192 | orchestrator | 2026-04-04 00:19:36.669210 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-04-04 00:19:36.669229 | orchestrator | Saturday 04 April 2026 00:19:29 +0000 (0:00:03.579) 0:00:03.751 ******** 2026-04-04 00:19:36.669247 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:36.669268 | orchestrator | 2026-04-04 00:19:36.669286 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-04 00:19:36.669371 | orchestrator | Saturday 04 April 2026 00:19:30 +0000 (0:00:00.065) 0:00:03.817 ******** 2026-04-04 00:19:36.669396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-04-04 00:19:36.669415 | orchestrator | 2026-04-04 00:19:36.669434 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-04 00:19:36.669454 | orchestrator | Saturday 04 April 2026 00:19:30 +0000 (0:00:00.083) 0:00:03.901 ******** 2026-04-04 00:19:36.669472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:19:36.669492 | orchestrator | 2026-04-04 00:19:36.669527 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-04-04 00:19:36.669547 | orchestrator | Saturday 04 April 2026 00:19:30 +0000 (0:00:00.075) 0:00:03.976 ******** 2026-04-04 00:19:36.669566 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:36.669580 | orchestrator | 2026-04-04 00:19:36.669593 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-04 00:19:36.669606 | orchestrator | Saturday 04 April 2026 00:19:31 +0000 (0:00:01.140) 0:00:05.117 ******** 2026-04-04 00:19:36.669619 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:36.669636 | orchestrator | 2026-04-04 00:19:36.669656 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-04 00:19:36.669674 | orchestrator | Saturday 04 April 2026 00:19:31 +0000 (0:00:00.049) 0:00:05.167 ******** 2026-04-04 00:19:36.669693 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:36.669711 | orchestrator | 2026-04-04 00:19:36.669726 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-04 00:19:36.669738 | orchestrator | Saturday 04 April 2026 00:19:32 +0000 (0:00:01.544) 0:00:06.712 ******** 2026-04-04 00:19:36.669752 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:36.669771 | orchestrator | 2026-04-04 00:19:36.669789 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-04 00:19:36.669808 | orchestrator | Saturday 04 April 2026 00:19:32 +0000 (0:00:00.061) 0:00:06.773 ******** 2026-04-04 00:19:36.669826 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:36.669844 | orchestrator | 2026-04-04 00:19:36.669862 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-04 00:19:36.669883 | orchestrator | Saturday 04 April 2026 00:19:33 +0000 (0:00:00.490) 0:00:07.264 ******** 2026-04-04 00:19:36.669902 | orchestrator | changed: 
[testbed-manager] 2026-04-04 00:19:36.669920 | orchestrator | 2026-04-04 00:19:36.669939 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-04 00:19:36.669957 | orchestrator | Saturday 04 April 2026 00:19:34 +0000 (0:00:01.030) 0:00:08.294 ******** 2026-04-04 00:19:36.669975 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:36.670072 | orchestrator | 2026-04-04 00:19:36.670097 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-04 00:19:36.670117 | orchestrator | Saturday 04 April 2026 00:19:35 +0000 (0:00:00.908) 0:00:09.202 ******** 2026-04-04 00:19:36.670137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-04-04 00:19:36.670157 | orchestrator | 2026-04-04 00:19:36.670178 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-04 00:19:36.670198 | orchestrator | Saturday 04 April 2026 00:19:35 +0000 (0:00:00.067) 0:00:09.270 ******** 2026-04-04 00:19:36.670217 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:36.670228 | orchestrator | 2026-04-04 00:19:36.670240 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:19:36.670252 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:19:36.670268 | orchestrator | 2026-04-04 00:19:36.670285 | orchestrator | 2026-04-04 00:19:36.670302 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:19:36.670373 | orchestrator | Saturday 04 April 2026 00:19:36 +0000 (0:00:01.061) 0:00:10.332 ******** 2026-04-04 00:19:36.670392 | orchestrator | =============================================================================== 2026-04-04 00:19:36.670410 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.58s 2026-04-04 00:19:36.670428 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.54s 2026-04-04 00:19:36.670447 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s 2026-04-04 00:19:36.670465 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.06s 2026-04-04 00:19:36.670476 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2026-04-04 00:19:36.670488 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.91s 2026-04-04 00:19:36.670521 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.49s 2026-04-04 00:19:36.670533 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-04-04 00:19:36.670544 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-04-04 00:19:36.670554 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-04-04 00:19:36.670565 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-04-04 00:19:36.670575 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.06s 2026-04-04 00:19:36.670586 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-04-04 00:19:36.793251 | orchestrator | + osism apply sshconfig 2026-04-04 00:19:47.963402 | orchestrator | 2026-04-04 00:19:47 | INFO  | Prepare task for execution of sshconfig. 2026-04-04 00:19:48.033810 | orchestrator | 2026-04-04 00:19:48 | INFO  | Task 8caca958-0cc1-48f2-8675-5e0fde8d1219 (sshconfig) was prepared for execution. 
2026-04-04 00:19:48.033924 | orchestrator | 2026-04-04 00:19:48 | INFO  | It takes a moment until task 8caca958-0cc1-48f2-8675-5e0fde8d1219 (sshconfig) has been started and output is visible here. 2026-04-04 00:19:58.805661 | orchestrator | 2026-04-04 00:19:58.805752 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-04-04 00:19:58.805762 | orchestrator | 2026-04-04 00:19:58.805769 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-04-04 00:19:58.805776 | orchestrator | Saturday 04 April 2026 00:19:51 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-04-04 00:19:58.805782 | orchestrator | ok: [testbed-manager] 2026-04-04 00:19:58.805789 | orchestrator | 2026-04-04 00:19:58.805795 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-04-04 00:19:58.805823 | orchestrator | Saturday 04 April 2026 00:19:52 +0000 (0:00:00.874) 0:00:01.061 ******** 2026-04-04 00:19:58.805829 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:58.805836 | orchestrator | 2026-04-04 00:19:58.805841 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-04-04 00:19:58.805848 | orchestrator | Saturday 04 April 2026 00:19:52 +0000 (0:00:00.529) 0:00:01.590 ******** 2026-04-04 00:19:58.805854 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:19:58.805860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:19:58.805866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:19:58.805872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:19:58.805878 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:19:58.805884 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:19:58.805889 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-04-04 00:19:58.805895 | orchestrator | 2026-04-04 00:19:58.805901 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-04-04 00:19:58.805907 | orchestrator | Saturday 04 April 2026 00:19:58 +0000 (0:00:05.554) 0:00:07.145 ******** 2026-04-04 00:19:58.805913 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:19:58.805919 | orchestrator | 2026-04-04 00:19:58.805925 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-04-04 00:19:58.805931 | orchestrator | Saturday 04 April 2026 00:19:58 +0000 (0:00:00.104) 0:00:07.249 ******** 2026-04-04 00:19:58.805936 | orchestrator | changed: [testbed-manager] 2026-04-04 00:19:58.805943 | orchestrator | 2026-04-04 00:19:58.805949 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:19:58.805956 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:19:58.805963 | orchestrator | 2026-04-04 00:19:58.805969 | orchestrator | 2026-04-04 00:19:58.805975 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:19:58.805981 | orchestrator | Saturday 04 April 2026 00:19:58 +0000 (0:00:00.475) 0:00:07.725 ******** 2026-04-04 00:19:58.805987 | orchestrator | =============================================================================== 2026-04-04 00:19:58.805993 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.56s 2026-04-04 00:19:58.805999 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.87s 2026-04-04 00:19:58.806004 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s 2026-04-04 00:19:58.806010 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.48s 2026-04-04 00:19:58.806060 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.10s 2026-04-04 00:19:58.930755 | orchestrator | + osism apply known-hosts 2026-04-04 00:20:10.142448 | orchestrator | 2026-04-04 00:20:10 | INFO  | Prepare task for execution of known-hosts. 2026-04-04 00:20:10.215985 | orchestrator | 2026-04-04 00:20:10 | INFO  | Task ef47ea07-1553-4d35-b0f0-bd3486c1f37f (known-hosts) was prepared for execution. 2026-04-04 00:20:10.216068 | orchestrator | 2026-04-04 00:20:10 | INFO  | It takes a moment until task ef47ea07-1553-4d35-b0f0-bd3486c1f37f (known-hosts) has been started and output is visible here. 2026-04-04 00:20:24.712473 | orchestrator | 2026-04-04 00:20:24.712584 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-04-04 00:20:24.712601 | orchestrator | 2026-04-04 00:20:24.712615 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-04-04 00:20:24.712627 | orchestrator | Saturday 04 April 2026 00:20:13 +0000 (0:00:00.175) 0:00:00.175 ******** 2026-04-04 00:20:24.712639 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:20:24.712650 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:20:24.712685 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:20:24.712697 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:20:24.712707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:20:24.712718 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:20:24.712729 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-04 00:20:24.712739 | orchestrator | 2026-04-04 00:20:24.712750 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-04-04 
00:20:24.712763 | orchestrator | Saturday 04 April 2026 00:20:19 +0000 (0:00:06.057) 0:00:06.233 ******** 2026-04-04 00:20:24.712787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-04 00:20:24.712801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-04 00:20:24.712812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-04 00:20:24.712822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-04 00:20:24.712833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-04 00:20:24.712844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-04 00:20:24.712855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-04 00:20:24.712865 | orchestrator | 2026-04-04 00:20:24.712876 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.712888 | orchestrator | Saturday 04 April 2026 00:20:19 +0000 (0:00:00.141) 0:00:06.375 ******** 2026-04-04 00:20:24.712902 | 
orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqExUVVY0giKRqmtwnTTdj+lMVjI/4WhE4SaUex0RYuAT07bkGYzQ6wUYj6EAiTBWsjeeIQu69HrkPTH8lPFY=) 2026-04-04 00:20:24.712919 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqSRfO0AvcAiUGQLwuM7rUprq/iqrqpO0C2DbjEoVDQpmLZ20OzpVJiX0z32SQYLNeDG85v3BdTQkzi+3bYzM7WeIUQg2T2yj7AlbLHT7qW/WyBS2UrtVMCXToMfuLZku+hiYf4GRMK67zPLLL7jlZGR9ZKQa9NRKX71YgJcoTEKgV2GEVx2uiUhiJ03S78NN+37hYThn9Fc2qXSyxCXl4e4wJIhzqn91qrjHLFEJpcwpmDfE4SxgKZy9ncJDZyT2x8og+Ylmqrgf9ceKhtLYSHmg95X3IhIPpWN9bw/vbVkpZKDqhE3QY3uK73B6zH/XHWig/BJjAXXUqW+l+xMkz+guF/02D8iEORCaEIhz1Tx9U7Ga4VeMVuUQkkY6X6wr+myf2KMv0v1m3d/JOeyu6dbsyk3x3ic4aYr7wSOZxvTEMDz7xnFTUeIba1mR9/ZYdzvbopL1mwFd8zbapkH80IIf/dKv5DBo4QMlu+MkW/MYQWcYfNDe1owhhFgubTb0=) 2026-04-04 00:20:24.712936 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHb8qDKPQN5HMUzqj5kYmx9mz5tD0OfMYPef7bjLeGD) 2026-04-04 00:20:24.712951 | orchestrator | 2026-04-04 00:20:24.712964 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.712976 | orchestrator | Saturday 04 April 2026 00:20:20 +0000 (0:00:01.104) 0:00:07.479 ******** 2026-04-04 00:20:24.712989 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHQmuLdB6oFW4aaHb7vAvd52/uHaALLpmUpzq3+Bd1hhC+8eEeBn2OTKGydS7LY1mTt6eubeztcF294Aa3wrOIg=) 2026-04-04 00:20:24.713078 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCt4rGtsaQHLFwYnI+ZwkKVrv0EunKVBnlsKhnAwk7Q/DGE8cs/NaN1x+2uW4QlVYx7pxp0mJNuwVBNJUJxMcv3n2RRhDegHjxTPcN94xqoEx/O3K2O0I3qiK4MuU8x8qi9QT2rV/q7rw07lnIEBaFWbVUymtSdyMUbJPfhIFzij3LAYxE5J5dVKJ3uqsWjbUtw88DiHkFW4xrd4bb9ZSVwQPMZu6D5OqfaTlSun9d3u5uHDtaWfIZ9C25grCiHntmIwLBZN2nqEem9qlMTUx5yW8nrbjPFEaomHW+iffOF37dIwec3Ct/iULFwjK4tTb6I9Fbynpu+8ipqDZ7VzJ4OVjP0/i7t0JwiD5qJxHfWXKxhU3zwU7Jd6tzTv4GD/uyLdu4XD5f/fXwplOWgMTGCX6yjWj2UdccMa7h/Wc+AWoqvg4bkXB6KS1diXpwUs3Mk5e616zNiDv3P8F4dAXAhQDw0mb/QuUJexl/eJ65vDY7t+tn35a4ZfRdVTM7Gdis=) 2026-04-04 00:20:24.713096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9C/xZ/qd9B8zPLUS5JkcBYmjUctPFHCh4VqZWhV2Pi) 2026-04-04 00:20:24.713109 | orchestrator | 2026-04-04 00:20:24.713122 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.713135 | orchestrator | Saturday 04 April 2026 00:20:21 +0000 (0:00:00.919) 0:00:08.399 ******** 2026-04-04 00:20:24.713147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfS+ZkiJkowqnPIG3QkV8Qe4wIiSjuJAHNlmAD/pGVhEjdZbbaQFhhboWgMeFuLcTkLiVxBJTbxRhyVH+vlXT8=) 2026-04-04 00:20:24.713160 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwGkOGtvcdY0tKBtfwAutUWXLfJyo6CiEphmGG8es2p7cGwst7lFkCe1MjxSFbCjffHp7T01vFlaCuKOAdPRIPQG4todp9FqK4tL2UhtLQvqKwLVCzyzfOnoVIzl8JDNhQWdW9C//3h9ywrsoG7MHcD42JObq6xhwxeuTZ3sDmY/DYkFOwyn8cDDODdUIxKAhZGLANRXMbx9+Plwp5St9J76yEuHYFnb25/czLaTBByNQIQmSE7lFvKM0Qucm/eggPsKQJIxowuvn97ZDb5fhwbjBo06LWUh4lYFGcOoDOpspRMA496t1FdFqmzzNVkiFLYPnzMmdol99aB1aFDzmQW0ww5FQG/odi86/uvpUUJpPVDhDhzkJiAAqjRWGHnpQXzlWpMSYDZSzmuYc53C88LV3pBYtaPb2WQEjv5KOeCO0OgGfbSZyKK1ob3NucXQhPOSHgb5oTC9tRkDezr901VKmENH7jCvsfsiJ8W3vfyIBbI5EkpL9DcSfXSFDJMe0=) 2026-04-04 00:20:24.713246 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDSHGVRsYvW7RNPb1HFmkhZImuqCtD/SuocbHyWC7VF) 2026-04-04 00:20:24.713260 | orchestrator | 2026-04-04 00:20:24.713272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.713285 | orchestrator | Saturday 04 April 2026 00:20:22 +0000 (0:00:00.953) 0:00:09.353 ******** 2026-04-04 00:20:24.713301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+9wJEbC1BzOBByAQjQIdv+copj0C/vmp6wEcSZ1Vj2RZ57CwZRnVSBGPNeb9eVHtLPF24Jf/1HkKk/ZRb3VdQ=) 2026-04-04 00:20:24.713314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC38NqwTL5V5GtexWa365d5MUJq9ctX9xYUl2Ko8g07RXJledsdZukxJrttK31bl1NYzuG8GUN3ItxJPr9cCVAdAgMFv6lU4As45IYumKVAERby0FUyoPDlY4w95mvbYG6axCMqNQ2rTQ+RnDJ56Ra+CxVPdsVFkbunscelekkrEoLTB9M7TOpwX+KidTm/991cxXAwjFHkKjP2M0Itu+z3wbY2PYXRkPum/BHSGs75AI3XzPx70iwWrRUhqi7O59DrClzcaBazKuaz8kLiBxOrhuExJ+EVZQIqhvO+MktQOv1UM1sI6C5VatyoKEdwjimr+y99f1kZG8hsNSbRG2CYvXs2a4N4ZsdYnMSEzqQW1nbbVB4KctpKgi6ih+TN8FtWSbrVPcmU7CJqPzvHAnmKSv/aJh8zlSW1rXaj77Gs8DzxxujeIpcKTjaR0/0Z7GalvR2sXeJ6v7aGGEfd4YZU6ZWBPvQwgY3s44e0k6FP7t2TJGaVWF9rIp2veM0BX60=) 2026-04-04 00:20:24.713351 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJCX+9RlGUC8IBpzCCLkkpLfYUeBJrEqYpNBbiXPtyRw) 2026-04-04 00:20:24.713371 | orchestrator | 2026-04-04 00:20:24.713390 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.713410 | orchestrator | Saturday 04 April 2026 00:20:23 +0000 (0:00:00.954) 0:00:10.308 ******** 2026-04-04 00:20:24.713429 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCkXSbQ0sTDxWOtBRWBb12Efxr5mOJHh3eUdLkWcgC/H3YswbVF72LCXR1tYQ6OmanUMJ3dDkuF+SOA/5BBxAqJYOUseUyz4QQQ/aQ2gnRopxYk29KrXEqs4XRqSv/OYY7VUakzQTbzXhmBblKtRQijdpvCKT0s65JgQzA/kgwbozngdfkWzYL67oDf/bjGdX/twPGbXT3H4tNBdiMtrmmY2NWsGyc2SxSJ1tHqgjb2l8Y3ELHJKBGfLtkf5E6Cm9dW52aAQc85Wvike7qE4IgLxpCV+bmzJlMp2UraCTCYZ9nlJIaQOS/zETEqxi0kHdy41yBALv4jhyOmB3R4n+8m7gO6poRc6E0Oo5qpPUucJShuHZKYAGjuChV1LAfN2WcVz/MtuoBgFrrEFldffybMq/Qn6BJ85TVK2inX0C9Glf4X75uPH9D2IW8uDbVLCOKF1/cIHdcZtZtcrI2gMVykLZagi/19ajlZ1sr+XEjxnaUOBjJYQDSzdmjQ7NZE0jk=) 2026-04-04 00:20:24.713456 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe5fNFarGfTGMt4gybCebj0hI1oBbZ7jNXOG8kdIz2QZ/nLcp1EMAZWWwCQ1eLXnFAR+bOnN5nBPmQP33n85A8=) 2026-04-04 00:20:24.713467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGm6mOQCXUvp0Cpxg0E7h7KOv+w6hsk+nVOS3h/sOg2U) 2026-04-04 00:20:24.713478 | orchestrator | 2026-04-04 00:20:24.713490 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:24.713501 | orchestrator | Saturday 04 April 2026 00:20:24 +0000 (0:00:01.015) 0:00:11.324 ******** 2026-04-04 00:20:24.713521 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC78zDkHWnHWke1k1X+d/f/E0bm++f8DwvSPYqjRAWB3Ftx7wgx3XOEXE/gB9hMoD4LXuP+MvLt0WVohIJzfJ2U=) 2026-04-04 00:20:35.061639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuDBtEIljvP5iXmz0lovi5sdjKRKi0vdw44LT6M2w9NqYU7ptmizyLVAti7fG+zuz1A5x5OLV/PAts62LoDzcPzuJyuefklGHPsE48cyy3VCV8IGjtB66OtJ9tcI15a5A6d6H5RC3LdRATyZHFY9g/ElQHc4keMzXgTc4XXGXMXaQIAgMtjoiaagQI6VMlIHH3vDYzL0LF1Euk9GTIx+OQFTSBSeDqKRbjbcTlnmXbA0l5WEPVjMUydFbE/BTKlg+mxzva7sswYynN0MSVuGN8Ggz/HpC2FmY/nPNMubrccoOvsFKjbN9d6HeKC8MKJ7EWbFfw7Wsj2Pu+txoJJNWB92pH6xFnnd4sinsAdmYA11UuQAmOZlsZWmUDrYszHup8cF+WI9MxVqlkbSmRiuPyKc0gIkmJ2rf1ba5JDTtS6Gcs4bGHFQ7BlEcv4l5CfSsNzmzeydYp3aSNtK7rH9jewxrOcyyIHAz2udKm/EFV7X0yN4QLo5BNky2L7EtT3Oc=) 2026-04-04 00:20:35.061721 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhYobN+BO7NXuxnTWou/1k7JSAz9TmmAnBu1T9K4GW/) 2026-04-04 00:20:35.061729 | orchestrator | 2026-04-04 00:20:35.061735 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:35.061741 | orchestrator | Saturday 04 April 2026 00:20:25 +0000 (0:00:01.064) 0:00:12.388 ******** 2026-04-04 00:20:35.061745 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkan2QyZeHQiN6XxmLkZYoJU6Ply9+uu8GzUOLgQFZdOUG/wP9gCyGQtbyAbk11f0d2F30ZIZ+RLfCWInK0Me8e/e17fNTRljzYDCrFsZ1KfpYGoXYi1YdIFA5uvX0XRCMJz2omi1FCzbyDZayOSuqfGfNHNQ3EL5QaSvsEUNxpfIdw8E29H7m5xR3otYlp+lFDg26QNYJZaKfBAZ0iYD1WLHALGucTvVmvqRVL7HstHVm6S7etEnZdNX8/4WOG9wdl/qxO7509fB4kpfTizCQFqv45/4Ukj4u6S++i1kG12uaQogDkroHPJjbMelCav5Bx85qNeeZLQtcYHw1z3df5wt7rL4L68P3jKOm11i7sdwA9jw3SfjkPJPRW/eXIDpH2AbEsrQJuItnFgi8s5NqkjZ608nCSAaXvbawX5JU0IUhIAsvg/iki3AMc5QAkEomOif5PI9YQjRXdsCHut4smagwI2DA1oF0pSrQuBcpzfGl6upygi+O6KIWjFzneTs=) 2026-04-04 00:20:35.061750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJv3h05Z8BFL9lLFOnXeznqnF/DpHBdKFRkrSsJHWDMo0jTPE19p2ROuS9ryVgswJnj05ULyilvJWEcjIgvYNx0=) 2026-04-04 00:20:35.061756 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpq67xYbElC6AlEJD3nCki+AA1A947tNU4WWtZ/xig6) 2026-04-04 00:20:35.061760 | orchestrator | 2026-04-04 00:20:35.061765 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-04 00:20:35.061770 | orchestrator | Saturday 04 April 2026 00:20:26 +0000 (0:00:01.002) 0:00:13.390 ******** 2026-04-04 00:20:35.061775 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-04 00:20:35.061779 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-04 00:20:35.061783 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-04 00:20:35.061787 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-04 00:20:35.061791 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-04-04 00:20:35.061807 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-04 00:20:35.061824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-04 00:20:35.061828 | orchestrator | 2026-04-04 00:20:35.061832 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-04 00:20:35.061837 | orchestrator | Saturday 04 April 2026 00:20:31 +0000 (0:00:05.118) 0:00:18.509 ******** 2026-04-04 00:20:35.061842 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-04 00:20:35.061848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-04 00:20:35.061852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-1) 2026-04-04 00:20:35.061855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-04 00:20:35.061859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-04 00:20:35.061863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-04 00:20:35.061867 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-04 00:20:35.061871 | orchestrator | 2026-04-04 00:20:35.061884 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:35.061888 | orchestrator | Saturday 04 April 2026 00:20:31 +0000 (0:00:00.150) 0:00:18.660 ******** 2026-04-04 00:20:35.061892 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqExUVVY0giKRqmtwnTTdj+lMVjI/4WhE4SaUex0RYuAT07bkGYzQ6wUYj6EAiTBWsjeeIQu69HrkPTH8lPFY=) 2026-04-04 00:20:35.061898 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCqSRfO0AvcAiUGQLwuM7rUprq/iqrqpO0C2DbjEoVDQpmLZ20OzpVJiX0z32SQYLNeDG85v3BdTQkzi+3bYzM7WeIUQg2T2yj7AlbLHT7qW/WyBS2UrtVMCXToMfuLZku+hiYf4GRMK67zPLLL7jlZGR9ZKQa9NRKX71YgJcoTEKgV2GEVx2uiUhiJ03S78NN+37hYThn9Fc2qXSyxCXl4e4wJIhzqn91qrjHLFEJpcwpmDfE4SxgKZy9ncJDZyT2x8og+Ylmqrgf9ceKhtLYSHmg95X3IhIPpWN9bw/vbVkpZKDqhE3QY3uK73B6zH/XHWig/BJjAXXUqW+l+xMkz+guF/02D8iEORCaEIhz1Tx9U7Ga4VeMVuUQkkY6X6wr+myf2KMv0v1m3d/JOeyu6dbsyk3x3ic4aYr7wSOZxvTEMDz7xnFTUeIba1mR9/ZYdzvbopL1mwFd8zbapkH80IIf/dKv5DBo4QMlu+MkW/MYQWcYfNDe1owhhFgubTb0=) 2026-04-04 00:20:35.061902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHb8qDKPQN5HMUzqj5kYmx9mz5tD0OfMYPef7bjLeGD) 2026-04-04 00:20:35.061906 | orchestrator | 2026-04-04 00:20:35.061910 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:35.061914 | orchestrator | Saturday 04 April 2026 00:20:32 +0000 (0:00:00.897) 0:00:19.557 ******** 2026-04-04 00:20:35.061918 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE9C/xZ/qd9B8zPLUS5JkcBYmjUctPFHCh4VqZWhV2Pi) 2026-04-04 00:20:35.061922 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCt4rGtsaQHLFwYnI+ZwkKVrv0EunKVBnlsKhnAwk7Q/DGE8cs/NaN1x+2uW4QlVYx7pxp0mJNuwVBNJUJxMcv3n2RRhDegHjxTPcN94xqoEx/O3K2O0I3qiK4MuU8x8qi9QT2rV/q7rw07lnIEBaFWbVUymtSdyMUbJPfhIFzij3LAYxE5J5dVKJ3uqsWjbUtw88DiHkFW4xrd4bb9ZSVwQPMZu6D5OqfaTlSun9d3u5uHDtaWfIZ9C25grCiHntmIwLBZN2nqEem9qlMTUx5yW8nrbjPFEaomHW+iffOF37dIwec3Ct/iULFwjK4tTb6I9Fbynpu+8ipqDZ7VzJ4OVjP0/i7t0JwiD5qJxHfWXKxhU3zwU7Jd6tzTv4GD/uyLdu4XD5f/fXwplOWgMTGCX6yjWj2UdccMa7h/Wc+AWoqvg4bkXB6KS1diXpwUs3Mk5e616zNiDv3P8F4dAXAhQDw0mb/QuUJexl/eJ65vDY7t+tn35a4ZfRdVTM7Gdis=) 2026-04-04 00:20:35.061930 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHQmuLdB6oFW4aaHb7vAvd52/uHaALLpmUpzq3+Bd1hhC+8eEeBn2OTKGydS7LY1mTt6eubeztcF294Aa3wrOIg=) 2026-04-04 00:20:35.061934 | orchestrator | 2026-04-04 00:20:35.061938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:35.061942 | orchestrator | Saturday 04 April 2026 00:20:33 +0000 (0:00:00.939) 0:00:20.496 ******** 2026-04-04 00:20:35.061947 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGfS+ZkiJkowqnPIG3QkV8Qe4wIiSjuJAHNlmAD/pGVhEjdZbbaQFhhboWgMeFuLcTkLiVxBJTbxRhyVH+vlXT8=) 2026-04-04 00:20:35.061951 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwGkOGtvcdY0tKBtfwAutUWXLfJyo6CiEphmGG8es2p7cGwst7lFkCe1MjxSFbCjffHp7T01vFlaCuKOAdPRIPQG4todp9FqK4tL2UhtLQvqKwLVCzyzfOnoVIzl8JDNhQWdW9C//3h9ywrsoG7MHcD42JObq6xhwxeuTZ3sDmY/DYkFOwyn8cDDODdUIxKAhZGLANRXMbx9+Plwp5St9J76yEuHYFnb25/czLaTBByNQIQmSE7lFvKM0Qucm/eggPsKQJIxowuvn97ZDb5fhwbjBo06LWUh4lYFGcOoDOpspRMA496t1FdFqmzzNVkiFLYPnzMmdol99aB1aFDzmQW0ww5FQG/odi86/uvpUUJpPVDhDhzkJiAAqjRWGHnpQXzlWpMSYDZSzmuYc53C88LV3pBYtaPb2WQEjv5KOeCO0OgGfbSZyKK1ob3NucXQhPOSHgb5oTC9tRkDezr901VKmENH7jCvsfsiJ8W3vfyIBbI5EkpL9DcSfXSFDJMe0=) 2026-04-04 00:20:35.061955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFDSHGVRsYvW7RNPb1HFmkhZImuqCtD/SuocbHyWC7VF) 2026-04-04 00:20:35.061959 | orchestrator | 2026-04-04 00:20:35.061963 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:35.061967 | orchestrator | Saturday 04 April 2026 00:20:34 +0000 (0:00:00.947) 0:00:21.444 ******** 2026-04-04 00:20:35.061971 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJCX+9RlGUC8IBpzCCLkkpLfYUeBJrEqYpNBbiXPtyRw) 2026-04-04 00:20:35.061985 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC38NqwTL5V5GtexWa365d5MUJq9ctX9xYUl2Ko8g07RXJledsdZukxJrttK31bl1NYzuG8GUN3ItxJPr9cCVAdAgMFv6lU4As45IYumKVAERby0FUyoPDlY4w95mvbYG6axCMqNQ2rTQ+RnDJ56Ra+CxVPdsVFkbunscelekkrEoLTB9M7TOpwX+KidTm/991cxXAwjFHkKjP2M0Itu+z3wbY2PYXRkPum/BHSGs75AI3XzPx70iwWrRUhqi7O59DrClzcaBazKuaz8kLiBxOrhuExJ+EVZQIqhvO+MktQOv1UM1sI6C5VatyoKEdwjimr+y99f1kZG8hsNSbRG2CYvXs2a4N4ZsdYnMSEzqQW1nbbVB4KctpKgi6ih+TN8FtWSbrVPcmU7CJqPzvHAnmKSv/aJh8zlSW1rXaj77Gs8DzxxujeIpcKTjaR0/0Z7GalvR2sXeJ6v7aGGEfd4YZU6ZWBPvQwgY3s44e0k6FP7t2TJGaVWF9rIp2veM0BX60=) 2026-04-04 00:20:39.034825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+9wJEbC1BzOBByAQjQIdv+copj0C/vmp6wEcSZ1Vj2RZ57CwZRnVSBGPNeb9eVHtLPF24Jf/1HkKk/ZRb3VdQ=) 2026-04-04 00:20:39.034924 | orchestrator | 2026-04-04 00:20:39.034940 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:39.034953 | orchestrator | Saturday 04 April 2026 00:20:35 +0000 (0:00:00.924) 0:00:22.368 ******** 2026-04-04 00:20:39.034966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkXSbQ0sTDxWOtBRWBb12Efxr5mOJHh3eUdLkWcgC/H3YswbVF72LCXR1tYQ6OmanUMJ3dDkuF+SOA/5BBxAqJYOUseUyz4QQQ/aQ2gnRopxYk29KrXEqs4XRqSv/OYY7VUakzQTbzXhmBblKtRQijdpvCKT0s65JgQzA/kgwbozngdfkWzYL67oDf/bjGdX/twPGbXT3H4tNBdiMtrmmY2NWsGyc2SxSJ1tHqgjb2l8Y3ELHJKBGfLtkf5E6Cm9dW52aAQc85Wvike7qE4IgLxpCV+bmzJlMp2UraCTCYZ9nlJIaQOS/zETEqxi0kHdy41yBALv4jhyOmB3R4n+8m7gO6poRc6E0Oo5qpPUucJShuHZKYAGjuChV1LAfN2WcVz/MtuoBgFrrEFldffybMq/Qn6BJ85TVK2inX0C9Glf4X75uPH9D2IW8uDbVLCOKF1/cIHdcZtZtcrI2gMVykLZagi/19ajlZ1sr+XEjxnaUOBjJYQDSzdmjQ7NZE0jk=) 2026-04-04 00:20:39.034979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEe5fNFarGfTGMt4gybCebj0hI1oBbZ7jNXOG8kdIz2QZ/nLcp1EMAZWWwCQ1eLXnFAR+bOnN5nBPmQP33n85A8=) 2026-04-04 00:20:39.035012 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGm6mOQCXUvp0Cpxg0E7h7KOv+w6hsk+nVOS3h/sOg2U) 2026-04-04 00:20:39.035023 | orchestrator | 2026-04-04 00:20:39.035048 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:39.035058 | orchestrator | Saturday 04 April 2026 00:20:36 +0000 (0:00:00.935) 0:00:23.303 ******** 2026-04-04 00:20:39.035068 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC78zDkHWnHWke1k1X+d/f/E0bm++f8DwvSPYqjRAWB3Ftx7wgx3XOEXE/gB9hMoD4LXuP+MvLt0WVohIJzfJ2U=) 2026-04-04 00:20:39.035079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuDBtEIljvP5iXmz0lovi5sdjKRKi0vdw44LT6M2w9NqYU7ptmizyLVAti7fG+zuz1A5x5OLV/PAts62LoDzcPzuJyuefklGHPsE48cyy3VCV8IGjtB66OtJ9tcI15a5A6d6H5RC3LdRATyZHFY9g/ElQHc4keMzXgTc4XXGXMXaQIAgMtjoiaagQI6VMlIHH3vDYzL0LF1Euk9GTIx+OQFTSBSeDqKRbjbcTlnmXbA0l5WEPVjMUydFbE/BTKlg+mxzva7sswYynN0MSVuGN8Ggz/HpC2FmY/nPNMubrccoOvsFKjbN9d6HeKC8MKJ7EWbFfw7Wsj2Pu+txoJJNWB92pH6xFnnd4sinsAdmYA11UuQAmOZlsZWmUDrYszHup8cF+WI9MxVqlkbSmRiuPyKc0gIkmJ2rf1ba5JDTtS6Gcs4bGHFQ7BlEcv4l5CfSsNzmzeydYp3aSNtK7rH9jewxrOcyyIHAz2udKm/EFV7X0yN4QLo5BNky2L7EtT3Oc=) 2026-04-04 00:20:39.035089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJhYobN+BO7NXuxnTWou/1k7JSAz9TmmAnBu1T9K4GW/) 2026-04-04 00:20:39.035099 | orchestrator | 2026-04-04 00:20:39.035109 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-04 00:20:39.035119 | orchestrator | Saturday 04 April 2026 00:20:37 +0000 (0:00:00.929) 0:00:24.233 ******** 2026-04-04 00:20:39.035134 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkan2QyZeHQiN6XxmLkZYoJU6Ply9+uu8GzUOLgQFZdOUG/wP9gCyGQtbyAbk11f0d2F30ZIZ+RLfCWInK0Me8e/e17fNTRljzYDCrFsZ1KfpYGoXYi1YdIFA5uvX0XRCMJz2omi1FCzbyDZayOSuqfGfNHNQ3EL5QaSvsEUNxpfIdw8E29H7m5xR3otYlp+lFDg26QNYJZaKfBAZ0iYD1WLHALGucTvVmvqRVL7HstHVm6S7etEnZdNX8/4WOG9wdl/qxO7509fB4kpfTizCQFqv45/4Ukj4u6S++i1kG12uaQogDkroHPJjbMelCav5Bx85qNeeZLQtcYHw1z3df5wt7rL4L68P3jKOm11i7sdwA9jw3SfjkPJPRW/eXIDpH2AbEsrQJuItnFgi8s5NqkjZ608nCSAaXvbawX5JU0IUhIAsvg/iki3AMc5QAkEomOif5PI9YQjRXdsCHut4smagwI2DA1oF0pSrQuBcpzfGl6upygi+O6KIWjFzneTs=) 2026-04-04 00:20:39.035150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJv3h05Z8BFL9lLFOnXeznqnF/DpHBdKFRkrSsJHWDMo0jTPE19p2ROuS9ryVgswJnj05ULyilvJWEcjIgvYNx0=) 2026-04-04 00:20:39.035169 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBpq67xYbElC6AlEJD3nCki+AA1A947tNU4WWtZ/xig6) 2026-04-04 00:20:39.035187 | orchestrator | 2026-04-04 00:20:39.035204 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-04 00:20:39.035221 | orchestrator | Saturday 04 April 2026 00:20:38 +0000 (0:00:00.948) 0:00:25.181 ******** 2026-04-04 00:20:39.035240 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-04 00:20:39.035258 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-04 00:20:39.035277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-04 00:20:39.035294 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-04 00:20:39.035332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-04 00:20:39.035400 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-04 00:20:39.035413 | orchestrator | skipping: [testbed-manager] 
=> (item=testbed-node-5)  2026-04-04 00:20:39.035425 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:20:39.035436 | orchestrator | 2026-04-04 00:20:39.035448 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-04 00:20:39.035460 | orchestrator | Saturday 04 April 2026 00:20:38 +0000 (0:00:00.161) 0:00:25.343 ******** 2026-04-04 00:20:39.035528 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:20:39.035540 | orchestrator | 2026-04-04 00:20:39.035551 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-04 00:20:39.035563 | orchestrator | Saturday 04 April 2026 00:20:38 +0000 (0:00:00.045) 0:00:25.388 ******** 2026-04-04 00:20:39.035575 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:20:39.035586 | orchestrator | 2026-04-04 00:20:39.035597 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-04 00:20:39.035608 | orchestrator | Saturday 04 April 2026 00:20:38 +0000 (0:00:00.059) 0:00:25.448 ******** 2026-04-04 00:20:39.035621 | orchestrator | changed: [testbed-manager] 2026-04-04 00:20:39.035632 | orchestrator | 2026-04-04 00:20:39.035643 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:20:39.035656 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:20:39.035668 | orchestrator | 2026-04-04 00:20:39.035680 | orchestrator | 2026-04-04 00:20:39.035692 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:20:39.035704 | orchestrator | Saturday 04 April 2026 00:20:38 +0000 (0:00:00.433) 0:00:25.881 ******** 2026-04-04 00:20:39.035716 | orchestrator | =============================================================================== 2026-04-04 00:20:39.035727 | orchestrator | osism.commons.known_hosts 
: Run ssh-keyscan for all hosts with hostname --- 6.06s 2026-04-04 00:20:39.035736 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.12s 2026-04-04 00:20:39.035747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-04-04 00:20:39.035756 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-04-04 00:20:39.035766 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-04 00:20:39.035776 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-04 00:20:39.035785 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-04-04 00:20:39.035795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-04 00:20:39.035804 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-04 00:20:39.035814 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-04-04 00:20:39.035823 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-04-04 00:20:39.035832 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2026-04-04 00:20:39.035842 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-04-04 00:20:39.035860 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-04-04 00:20:39.035870 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-04-04 00:20:39.035880 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-04-04 00:20:39.035889 | orchestrator | osism.commons.known_hosts : Set 
file permissions ------------------------ 0.43s 2026-04-04 00:20:39.035899 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-04-04 00:20:39.035908 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.15s 2026-04-04 00:20:39.035919 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.14s 2026-04-04 00:20:39.159212 | orchestrator | + osism apply squid 2026-04-04 00:20:50.336291 | orchestrator | 2026-04-04 00:20:50 | INFO  | Prepare task for execution of squid. 2026-04-04 00:20:50.399830 | orchestrator | 2026-04-04 00:20:50 | INFO  | Task 0cf8460d-89a7-4bbd-90a6-747a52ea4dfc (squid) was prepared for execution. 2026-04-04 00:20:50.399953 | orchestrator | 2026-04-04 00:20:50 | INFO  | It takes a moment until task 0cf8460d-89a7-4bbd-90a6-747a52ea4dfc (squid) has been started and output is visible here. 2026-04-04 00:22:42.244453 | orchestrator | 2026-04-04 00:22:42.244628 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-04 00:22:42.244648 | orchestrator | 2026-04-04 00:22:42.244661 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-04 00:22:42.244673 | orchestrator | Saturday 04 April 2026 00:20:53 +0000 (0:00:00.171) 0:00:00.171 ******** 2026-04-04 00:22:42.244684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:22:42.244697 | orchestrator | 2026-04-04 00:22:42.244708 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-04 00:22:42.244719 | orchestrator | Saturday 04 April 2026 00:20:53 +0000 (0:00:00.073) 0:00:00.245 ******** 2026-04-04 00:22:42.244730 | orchestrator | ok: [testbed-manager] 2026-04-04 
00:22:42.244743 | orchestrator | 2026-04-04 00:22:42.244754 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-04 00:22:42.244765 | orchestrator | Saturday 04 April 2026 00:20:55 +0000 (0:00:02.023) 0:00:02.269 ******** 2026-04-04 00:22:42.244777 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-04 00:22:42.244788 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-04 00:22:42.244799 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-04 00:22:42.244811 | orchestrator | 2026-04-04 00:22:42.244822 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-04 00:22:42.244833 | orchestrator | Saturday 04 April 2026 00:20:56 +0000 (0:00:01.132) 0:00:03.401 ******** 2026-04-04 00:22:42.244844 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-04 00:22:42.244855 | orchestrator | 2026-04-04 00:22:42.244867 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-04 00:22:42.244878 | orchestrator | Saturday 04 April 2026 00:20:57 +0000 (0:00:00.907) 0:00:04.309 ******** 2026-04-04 00:22:42.244889 | orchestrator | ok: [testbed-manager] 2026-04-04 00:22:42.244900 | orchestrator | 2026-04-04 00:22:42.244911 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-04 00:22:42.244922 | orchestrator | Saturday 04 April 2026 00:20:57 +0000 (0:00:00.310) 0:00:04.619 ******** 2026-04-04 00:22:42.244933 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:42.244944 | orchestrator | 2026-04-04 00:22:42.244955 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-04 00:22:42.244966 | orchestrator | Saturday 04 April 2026 00:20:58 +0000 (0:00:00.803) 0:00:05.423 ******** 2026-04-04 00:22:42.244977 | orchestrator | 
FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2026-04-04 00:22:42.244989 | orchestrator | ok: [testbed-manager] 2026-04-04 00:22:42.245002 | orchestrator | 2026-04-04 00:22:42.245015 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-04 00:22:42.245028 | orchestrator | Saturday 04 April 2026 00:21:29 +0000 (0:00:30.742) 0:00:36.166 ******** 2026-04-04 00:22:42.245042 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:42.245054 | orchestrator | 2026-04-04 00:22:42.245085 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-04 00:22:42.245097 | orchestrator | Saturday 04 April 2026 00:21:41 +0000 (0:00:11.967) 0:00:48.133 ******** 2026-04-04 00:22:42.245108 | orchestrator | Pausing for 60 seconds 2026-04-04 00:22:42.245120 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:42.245131 | orchestrator | 2026-04-04 00:22:42.245142 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-04 00:22:42.245153 | orchestrator | Saturday 04 April 2026 00:22:41 +0000 (0:01:00.073) 0:01:48.207 ******** 2026-04-04 00:22:42.245164 | orchestrator | ok: [testbed-manager] 2026-04-04 00:22:42.245176 | orchestrator | 2026-04-04 00:22:42.245187 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-04 00:22:42.245220 | orchestrator | Saturday 04 April 2026 00:22:41 +0000 (0:00:00.074) 0:01:48.281 ******** 2026-04-04 00:22:42.245231 | orchestrator | changed: [testbed-manager] 2026-04-04 00:22:42.245242 | orchestrator | 2026-04-04 00:22:42.245253 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:22:42.245264 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:22:42.245275 | orchestrator | 2026-04-04 
00:22:42.245286 | orchestrator | 2026-04-04 00:22:42.245297 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:22:42.245308 | orchestrator | Saturday 04 April 2026 00:22:42 +0000 (0:00:00.603) 0:01:48.885 ******** 2026-04-04 00:22:42.245319 | orchestrator | =============================================================================== 2026-04-04 00:22:42.245330 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-04-04 00:22:42.245340 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.74s 2026-04-04 00:22:42.245351 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.97s 2026-04-04 00:22:42.245362 | orchestrator | osism.services.squid : Install required packages ------------------------ 2.02s 2026-04-04 00:22:42.245372 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s 2026-04-04 00:22:42.245383 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.91s 2026-04-04 00:22:42.245394 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2026-04-04 00:22:42.245404 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2026-04-04 00:22:42.245415 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2026-04-04 00:22:42.245426 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-04-04 00:22:42.245436 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2026-04-04 00:22:42.400389 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 00:22:42.400473 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-04-04 00:22:42.407287 | orchestrator | + set -e 
2026-04-04 00:22:42.407379 | orchestrator | + NAMESPACE=kolla 2026-04-04 00:22:42.407397 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-04 00:22:42.413374 | orchestrator | ++ semver latest 9.0.0 2026-04-04 00:22:42.463106 | orchestrator | + [[ -1 -lt 0 ]] 2026-04-04 00:22:42.463200 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 00:22:42.463594 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-04-04 00:22:53.848225 | orchestrator | 2026-04-04 00:22:53 | INFO  | Prepare task for execution of operator. 2026-04-04 00:22:53.922274 | orchestrator | 2026-04-04 00:22:53 | INFO  | Task 8cd3c886-8755-4eed-a96e-0572deae18a6 (operator) was prepared for execution. 2026-04-04 00:22:53.922365 | orchestrator | 2026-04-04 00:22:53 | INFO  | It takes a moment until task 8cd3c886-8755-4eed-a96e-0572deae18a6 (operator) has been started and output is visible here. 2026-04-04 00:23:09.495189 | orchestrator | 2026-04-04 00:23:09.495314 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-04-04 00:23:09.495333 | orchestrator | 2026-04-04 00:23:09.495346 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 00:23:09.495358 | orchestrator | Saturday 04 April 2026 00:22:57 +0000 (0:00:00.183) 0:00:00.183 ******** 2026-04-04 00:23:09.495370 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:23:09.495383 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:23:09.495394 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:23:09.495404 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:23:09.495415 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:23:09.495430 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:23:09.495441 | orchestrator | 2026-04-04 00:23:09.495452 | orchestrator | TASK [Do not require tty for all users] **************************************** 
2026-04-04 00:23:09.495488 | orchestrator | Saturday 04 April 2026 00:23:01 +0000 (0:00:04.264) 0:00:04.447 ******** 2026-04-04 00:23:09.495500 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:23:09.495511 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:23:09.495521 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:23:09.495532 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:23:09.495610 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:23:09.495622 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:23:09.495633 | orchestrator | 2026-04-04 00:23:09.495643 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-04-04 00:23:09.495654 | orchestrator | 2026-04-04 00:23:09.495666 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-04 00:23:09.495680 | orchestrator | Saturday 04 April 2026 00:23:02 +0000 (0:00:00.840) 0:00:05.288 ******** 2026-04-04 00:23:09.495692 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:23:09.495709 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:23:09.495730 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:23:09.495749 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:23:09.495769 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:23:09.495791 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:23:09.495814 | orchestrator | 2026-04-04 00:23:09.495835 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-04 00:23:09.495853 | orchestrator | Saturday 04 April 2026 00:23:02 +0000 (0:00:00.139) 0:00:05.428 ******** 2026-04-04 00:23:09.495867 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:23:09.495878 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:23:09.495888 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:23:09.495899 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:23:09.495929 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:23:09.495940 | 
orchestrator | ok: [testbed-node-5] 2026-04-04 00:23:09.495951 | orchestrator | 2026-04-04 00:23:09.495962 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-04 00:23:09.495973 | orchestrator | Saturday 04 April 2026 00:23:02 +0000 (0:00:00.123) 0:00:05.551 ******** 2026-04-04 00:23:09.495984 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:23:09.495996 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:23:09.496007 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:23:09.496018 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:23:09.496029 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:23:09.496039 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:23:09.496050 | orchestrator | 2026-04-04 00:23:09.496061 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-04 00:23:09.496072 | orchestrator | Saturday 04 April 2026 00:23:03 +0000 (0:00:00.671) 0:00:06.223 ******** 2026-04-04 00:23:09.496083 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:23:09.496094 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:23:09.496104 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:23:09.496115 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:23:09.496126 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:23:09.496136 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:23:09.496147 | orchestrator | 2026-04-04 00:23:09.496158 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-04 00:23:09.496169 | orchestrator | Saturday 04 April 2026 00:23:03 +0000 (0:00:00.787) 0:00:07.010 ******** 2026-04-04 00:23:09.496180 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-04-04 00:23:09.496192 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-04-04 00:23:09.496202 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-04-04 
changed: [testbed-node-1] => (item=adm)
changed: [testbed-node-3] => (item=adm)
changed: [testbed-node-5] => (item=adm)
changed: [testbed-node-0] => (item=sudo)
changed: [testbed-node-2] => (item=sudo)
changed: [testbed-node-1] => (item=sudo)
changed: [testbed-node-4] => (item=sudo)
changed: [testbed-node-3] => (item=sudo)
changed: [testbed-node-5] => (item=sudo)

TASK [osism.commons.operator : Copy user sudoers file] *************************
Saturday 04 April 2026 00:23:05 +0000 (0:00:01.158) 0:00:08.169 ********
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
Saturday 04 April 2026 00:23:06 +0000 (0:00:01.322) 0:00:09.491 ********
changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
[WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
with a mode of 0700, this may cause issues when running as another user. To
avoid this, create the remote_tmp dir with the correct permissions manually
changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)

TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
Saturday 04 April 2026 00:23:07 +0000 (0:00:01.231) 0:00:10.722 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
Saturday 04 April 2026 00:23:07 +0000 (0:00:00.158) 0:00:10.881 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Create .ssh directory] **************************
Saturday 04 April 2026 00:23:07 +0000 (0:00:00.167) 0:00:11.049 ********
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]

TASK [osism.commons.operator : Check number of SSH authorized keys] ************
Saturday 04 April 2026 00:23:08 +0000 (0:00:00.506) 0:00:11.555 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Set ssh authorized keys] ************************
Saturday 04 April 2026 00:23:08 +0000 (0:00:00.147) 0:00:11.703 ********
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-1]
changed: [testbed-node-2] => (item=None)
changed: [testbed-node-2]

TASK [osism.commons.operator : Delete ssh authorized keys] *********************
Saturday 04 April 2026 00:23:09 +0000 (0:00:00.662) 0:00:12.366 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
Saturday 04 April 2026 00:23:09 +0000 (0:00:00.136) 0:00:12.502 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
Saturday 04 April 2026 00:23:09 +0000 (0:00:00.127) 0:00:12.630 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.operator : Set password] ***********************************
Saturday 04 April 2026 00:23:09 +0000 (0:00:00.129) 0:00:12.760 ********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.commons.operator : Unset & lock password] **************************
Saturday 04 April 2026 00:23:10 +0000 (0:00:00.632) 0:00:13.393 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:23:10 +0000 (0:00:00.212) 0:00:13.605 ********
===============================================================================
Gathering Facts --------------------------------------------------------- 4.26s
osism.commons.operator : Copy user sudoers file ------------------------- 1.32s
osism.commons.operator : Set language variables in .bashrc configuration file --- 1.23s
osism.commons.operator : Add user to additional groups ------------------ 1.16s
Do not require tty for all users ---------------------------------------- 0.84s
osism.commons.operator : Create user ------------------------------------ 0.79s
osism.commons.operator : Create operator group -------------------------- 0.67s
osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s
osism.commons.operator : Set password ----------------------------------- 0.63s
osism.commons.operator : Create .ssh directory -------------------------- 0.51s
osism.commons.operator : Unset & lock password -------------------------- 0.21s
osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
osism.commons.operator : Gather variables for each operating system ----- 0.14s
osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
osism.commons.operator : Delete authorized GitHub accounts -------------- 0.13s
osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
osism.commons.operator : Set operator_groups variable to default value --- 0.12s
+ osism apply --environment custom facts
2026-04-04 00:23:12 | INFO  | Trying to run play facts in environment custom
2026-04-04 00:23:22 | INFO  | Prepare task for execution of facts.
2026-04-04 00:23:22 | INFO  | Task 9a812fd0-db2e-454b-b0ab-460d96433df0 (facts) was prepared for execution.
2026-04-04 00:23:22 | INFO  | It takes a moment until task 9a812fd0-db2e-454b-b0ab-460d96433df0 (facts) has been started and output is visible here.

PLAY [Copy custom network devices fact] ****************************************

TASK [Create custom facts directory] *******************************************
Saturday 04 April 2026 00:23:25 +0000 (0:00:00.115) 0:00:00.115 ********
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
ok: [testbed-manager]

TASK [Copy fact file] **********************************************************
Saturday 04 April 2026 00:23:26 +0000 (0:00:01.413) 0:00:01.529 ********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-1]

PLAY [Copy custom ceph devices facts] ******************************************

TASK [osism.commons.repository : Gather variables for each operating system] ***
Saturday 04 April 2026 00:23:27 +0000 (0:00:00.097) 0:00:02.813 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Set repository_default fact to default value] ***
Saturday 04 April 2026 00:23:27 +0000 (0:00:00.196) 0:00:02.911 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Set repositories to default] ******************
Saturday 04 April 2026 00:23:28 +0000 (0:00:00.196) 0:00:03.107 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Include distribution specific repository tasks] ***
Saturday 04 April 2026 00:23:28 +0000 (0:00:00.198) 0:00:03.306 ********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
Saturday 04 April 2026 00:23:28 +0000 (0:00:00.126) 0:00:03.433 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
Saturday 04 April 2026 00:23:28 +0000 (0:00:00.419) 0:00:03.852 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
Saturday 04 April 2026 00:23:28 +0000 (0:00:00.126) 0:00:03.979 ********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

TASK [osism.commons.repository : Remove sources.list file] *********************
Saturday 04 April 2026 00:23:30 +0000 (0:00:01.059) 0:00:05.039 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
Saturday 04 April 2026 00:23:30 +0000 (0:00:00.463) 0:00:05.503 ********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [osism.commons.repository : Update package cache] *************************
Saturday 04 April 2026 00:23:31 +0000 (0:00:01.036) 0:00:06.539 ********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

TASK [Install required packages (RedHat)] **************************************
Saturday 04 April 2026 00:23:47 +0000 (0:00:16.391) 0:00:22.931 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Install required packages (Debian)] **************************************
Saturday 04 April 2026 00:23:48 +0000 (0:00:00.114) 0:00:23.046 ********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [Create custom facts directory] *******************************************
Saturday 04 April 2026 00:23:55 +0000 (0:00:07.590) 0:00:30.637 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Copy fact files] *********************************************************
Saturday 04 April 2026 00:23:56 +0000 (0:00:00.460) 0:00:31.097 ********
changed: [testbed-node-3] => (item=testbed_ceph_devices)
changed: [testbed-node-5] => (item=testbed_ceph_devices)
changed: [testbed-node-4] => (item=testbed_ceph_devices)
changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)

RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
Saturday 04 April 2026 00:23:59 +0000 (0:00:03.436) 0:00:34.534 ********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Saturday 04 April 2026 00:24:00 +0000 (0:00:01.422) 0:00:35.957 ********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:24:04 +0000 (0:00:03.716) 0:00:39.674 ********
===============================================================================
osism.commons.repository : Update package cache ------------------------ 16.39s
Install required packages (Debian) -------------------------------------- 7.59s
Gathers facts about hosts ----------------------------------------------- 3.72s
Copy fact files --------------------------------------------------------- 3.44s
osism.commons.repository : Force update of package cache ---------------- 1.42s
Create custom facts directory ------------------------------------------- 1.41s
Copy fact file ---------------------------------------------------------- 1.28s
osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s
osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
osism.commons.repository : Remove sources.list file --------------------- 0.46s
Create custom facts directory ------------------------------------------- 0.46s
osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
osism.commons.repository : Set repositories to default ------------------ 0.20s
osism.commons.repository : Set repository_default fact to default value --- 0.20s
osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
osism.commons.repository : Include distribution specific repository tasks --- 0.13s
Install required packages (RedHat) -------------------------------------- 0.11s
osism.commons.repository : Gather variables for each operating system --- 0.10s
+ osism apply bootstrap
2026-04-04 00:24:16 | INFO  | Prepare task for execution of bootstrap.
2026-04-04 00:24:16 | INFO  | Task 024565de-f675-4520-b54e-7c25346e3af7 (bootstrap) was prepared for execution.
2026-04-04 00:24:16 | INFO  | It takes a moment until task 024565de-f675-4520-b54e-7c25346e3af7 (bootstrap) has been started and output is visible here.

PLAY [Group hosts based on state bootstrap] ************************************

TASK [Group hosts based on state bootstrap] ************************************
Saturday 04 April 2026 00:24:19 +0000 (0:00:00.191) 0:00:00.191 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Saturday 04 April 2026 00:24:19 +0000 (0:00:00.296) 0:00:00.487 ********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts (if using --limit)] ***************************

TASK [Gathers facts about hosts] ***********************************************
Saturday 04 April 2026 00:24:24 +0000 (0:00:04.839) 0:00:05.327 ********
skipping: [testbed-manager] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-manager] => (item=testbed-node-2)
skipping: [testbed-manager] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-manager)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-3] => (item=testbed-manager)
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-5)
skipping: [testbed-manager]
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-5] => (item=testbed-manager)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-node-5]
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

PLAY [Apply bootstrap roles part 1] ********************************************

TASK [osism.commons.hostname : Set hostname] ***********************************
Saturday 04 April 2026 00:24:25 +0000 (0:00:00.407) 0:00:05.735 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]

TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
Saturday 04 April 2026 00:24:26 +0000 (0:00:01.163) 0:00:06.898 ********
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [osism.commons.hosts : Include type specific tasks] ***********************
Saturday 04 April 2026 00:24:27 +0000 (0:00:01.312) 0:00:08.211 ********
included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:31.730759 | orchestrator | 2026-04-04 00:24:31.730766 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-04-04 00:24:31.730773 | orchestrator | Saturday 04 April 2026 00:24:27 +0000 (0:00:00.255) 0:00:08.466 ******** 2026-04-04 00:24:31.730780 | orchestrator | changed: [testbed-manager] 2026-04-04 00:24:31.730786 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:31.730793 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:31.730800 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:31.730807 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:31.730813 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:31.730818 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:31.730824 | orchestrator | 2026-04-04 00:24:31.730830 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-04-04 00:24:31.730836 | orchestrator | Saturday 04 April 2026 00:24:29 +0000 (0:00:01.496) 0:00:09.962 ******** 2026-04-04 00:24:31.730879 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:31.730889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:31.730898 | orchestrator | 2026-04-04 00:24:31.730906 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-04-04 00:24:31.730914 | orchestrator | Saturday 04 April 2026 00:24:29 +0000 (0:00:00.283) 0:00:10.245 ******** 2026-04-04 00:24:31.730921 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:31.730929 | 
orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:31.730937 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:31.730944 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:31.730952 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:31.730969 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:31.730977 | orchestrator | 2026-04-04 00:24:31.730984 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-04-04 00:24:31.730992 | orchestrator | Saturday 04 April 2026 00:24:30 +0000 (0:00:01.029) 0:00:11.275 ******** 2026-04-04 00:24:31.731000 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:31.731007 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:31.731015 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:31.731022 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:31.731030 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:31.731037 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:31.731051 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:31.731059 | orchestrator | 2026-04-04 00:24:31.731067 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-04-04 00:24:31.731077 | orchestrator | Saturday 04 April 2026 00:24:31 +0000 (0:00:00.557) 0:00:11.833 ******** 2026-04-04 00:24:31.731085 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:24:31.731093 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:24:31.731100 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:24:31.731108 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:24:31.731116 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:24:31.731123 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:24:31.731131 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:31.731141 | orchestrator | 2026-04-04 00:24:31.731148 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-04-04 00:24:31.731157 | orchestrator | Saturday 04 April 2026 00:24:31 +0000 (0:00:00.397) 0:00:12.230 ******** 2026-04-04 00:24:31.731165 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:31.731172 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:24:31.731187 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:24:42.868694 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:24:42.868869 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:24:42.868889 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:24:42.868901 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:24:42.868913 | orchestrator | 2026-04-04 00:24:42.868926 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-04-04 00:24:42.868939 | orchestrator | Saturday 04 April 2026 00:24:31 +0000 (0:00:00.206) 0:00:12.437 ******** 2026-04-04 00:24:42.868953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:42.868982 | orchestrator | 2026-04-04 00:24:42.868994 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-04-04 00:24:42.869006 | orchestrator | Saturday 04 April 2026 00:24:32 +0000 (0:00:00.269) 0:00:12.706 ******** 2026-04-04 00:24:42.869018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:42.869030 | orchestrator | 2026-04-04 00:24:42.869041 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-04-04 
00:24:42.869052 | orchestrator | Saturday 04 April 2026 00:24:32 +0000 (0:00:00.272) 0:00:12.979 ******** 2026-04-04 00:24:42.869063 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.869075 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.869086 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.869096 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.869107 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.869119 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.869133 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.869153 | orchestrator | 2026-04-04 00:24:42.869178 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-04-04 00:24:42.869204 | orchestrator | Saturday 04 April 2026 00:24:33 +0000 (0:00:01.254) 0:00:14.233 ******** 2026-04-04 00:24:42.869223 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:42.869243 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:24:42.869262 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:24:42.869279 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:24:42.869298 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:24:42.869318 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:24:42.869331 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:24:42.869344 | orchestrator | 2026-04-04 00:24:42.869358 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-04-04 00:24:42.869396 | orchestrator | Saturday 04 April 2026 00:24:33 +0000 (0:00:00.211) 0:00:14.444 ******** 2026-04-04 00:24:42.869410 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.869423 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.869435 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.869448 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.869460 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.869473 | orchestrator 
| ok: [testbed-node-4] 2026-04-04 00:24:42.869483 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.869494 | orchestrator | 2026-04-04 00:24:42.869505 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-04-04 00:24:42.869515 | orchestrator | Saturday 04 April 2026 00:24:34 +0000 (0:00:00.553) 0:00:14.998 ******** 2026-04-04 00:24:42.869526 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:42.869537 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:24:42.869547 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:24:42.869558 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:24:42.869568 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:24:42.869680 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:24:42.869708 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:24:42.869730 | orchestrator | 2026-04-04 00:24:42.869749 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-04-04 00:24:42.869769 | orchestrator | Saturday 04 April 2026 00:24:34 +0000 (0:00:00.223) 0:00:15.222 ******** 2026-04-04 00:24:42.869786 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.869803 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:42.869820 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:42.869837 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:42.869855 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:42.869873 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:42.869893 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:42.869911 | orchestrator | 2026-04-04 00:24:42.869930 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-04-04 00:24:42.869946 | orchestrator | Saturday 04 April 2026 00:24:35 +0000 (0:00:00.538) 0:00:15.761 ******** 2026-04-04 00:24:42.869957 | orchestrator | ok: 
[testbed-manager] 2026-04-04 00:24:42.869967 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:42.869978 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:42.869989 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:42.870000 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:42.870010 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:42.870084 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:42.870095 | orchestrator | 2026-04-04 00:24:42.870118 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-04-04 00:24:42.870130 | orchestrator | Saturday 04 April 2026 00:24:36 +0000 (0:00:01.096) 0:00:16.857 ******** 2026-04-04 00:24:42.870141 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.870152 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.870163 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.870173 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.870184 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.870195 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.870205 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.870216 | orchestrator | 2026-04-04 00:24:42.870226 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-04-04 00:24:42.870238 | orchestrator | Saturday 04 April 2026 00:24:37 +0000 (0:00:00.957) 0:00:17.815 ******** 2026-04-04 00:24:42.870273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:42.870286 | orchestrator | 2026-04-04 00:24:42.870297 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-04-04 00:24:42.870320 | orchestrator | Saturday 04 April 2026 
00:24:37 +0000 (0:00:00.300) 0:00:18.115 ******** 2026-04-04 00:24:42.870331 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:42.870342 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:24:42.870353 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:24:42.870363 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:42.870374 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:24:42.870384 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:42.870395 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:42.870406 | orchestrator | 2026-04-04 00:24:42.870416 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-04 00:24:42.870427 | orchestrator | Saturday 04 April 2026 00:24:38 +0000 (0:00:01.184) 0:00:19.300 ******** 2026-04-04 00:24:42.870438 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.870449 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.870459 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.870470 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.870480 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.870490 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.870501 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.870511 | orchestrator | 2026-04-04 00:24:42.870522 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-04 00:24:42.870533 | orchestrator | Saturday 04 April 2026 00:24:38 +0000 (0:00:00.208) 0:00:19.508 ******** 2026-04-04 00:24:42.870544 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.870555 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.870566 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.870599 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.870610 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.870620 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.870631 | 
orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.870641 | orchestrator | 2026-04-04 00:24:42.870652 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-04 00:24:42.870663 | orchestrator | Saturday 04 April 2026 00:24:39 +0000 (0:00:00.215) 0:00:19.724 ******** 2026-04-04 00:24:42.870674 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.870685 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.870695 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.870706 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.870716 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.870727 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.870737 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.870748 | orchestrator | 2026-04-04 00:24:42.870759 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-04 00:24:42.870770 | orchestrator | Saturday 04 April 2026 00:24:39 +0000 (0:00:00.202) 0:00:19.927 ******** 2026-04-04 00:24:42.870782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:24:42.870794 | orchestrator | 2026-04-04 00:24:42.870806 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-04 00:24:42.870816 | orchestrator | Saturday 04 April 2026 00:24:39 +0000 (0:00:00.269) 0:00:20.196 ******** 2026-04-04 00:24:42.870827 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.870838 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.870848 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.870859 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.870869 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.870879 | orchestrator | ok: 
[testbed-node-4] 2026-04-04 00:24:42.870890 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.870900 | orchestrator | 2026-04-04 00:24:42.870911 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-04 00:24:42.870922 | orchestrator | Saturday 04 April 2026 00:24:40 +0000 (0:00:00.508) 0:00:20.704 ******** 2026-04-04 00:24:42.870933 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:24:42.870951 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:24:42.870962 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:24:42.870972 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:24:42.870983 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:24:42.870994 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:24:42.871004 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:24:42.871015 | orchestrator | 2026-04-04 00:24:42.871026 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-04 00:24:42.871037 | orchestrator | Saturday 04 April 2026 00:24:40 +0000 (0:00:00.208) 0:00:20.913 ******** 2026-04-04 00:24:42.871047 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.871058 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:42.871069 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:24:42.871080 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:24:42.871091 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.871101 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.871112 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.871122 | orchestrator | 2026-04-04 00:24:42.871133 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-04 00:24:42.871145 | orchestrator | Saturday 04 April 2026 00:24:41 +0000 (0:00:01.061) 0:00:21.975 ******** 2026-04-04 00:24:42.871155 | orchestrator | ok: [testbed-manager] 2026-04-04 
00:24:42.871166 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:24:42.871177 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:24:42.871187 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:24:42.871198 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.871208 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:24:42.871219 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.871229 | orchestrator | 2026-04-04 00:24:42.871240 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-04 00:24:42.871251 | orchestrator | Saturday 04 April 2026 00:24:41 +0000 (0:00:00.533) 0:00:22.508 ******** 2026-04-04 00:24:42.871262 | orchestrator | ok: [testbed-manager] 2026-04-04 00:24:42.871273 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:24:42.871283 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:24:42.871294 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:24:42.871312 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:23.219887 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.220012 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:23.220040 | orchestrator | 2026-04-04 00:25:23.220062 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-04 00:25:23.220084 | orchestrator | Saturday 04 April 2026 00:24:42 +0000 (0:00:01.104) 0:00:23.613 ******** 2026-04-04 00:25:23.220103 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:23.220123 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.220142 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.220161 | orchestrator | changed: [testbed-manager] 2026-04-04 00:25:23.220179 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:23.220199 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:23.220216 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:25:23.220234 | orchestrator | 2026-04-04 00:25:23.220253 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-04-04 00:25:23.220273 | orchestrator | Saturday 04 April 2026 00:24:59 +0000 (0:00:16.491) 0:00:40.104 ******** 2026-04-04 00:25:23.220293 | orchestrator | ok: [testbed-manager] 2026-04-04 00:25:23.220311 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:23.220330 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:23.220349 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:23.220368 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:23.220387 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.220408 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.220429 | orchestrator | 2026-04-04 00:25:23.220448 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-04-04 00:25:23.220467 | orchestrator | Saturday 04 April 2026 00:24:59 +0000 (0:00:00.206) 0:00:40.311 ******** 2026-04-04 00:25:23.220525 | orchestrator | ok: [testbed-manager] 2026-04-04 00:25:23.220548 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:23.220604 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:23.220627 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:23.220645 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:23.220665 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.220683 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.220702 | orchestrator | 2026-04-04 00:25:23.220722 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-04-04 00:25:23.220742 | orchestrator | Saturday 04 April 2026 00:24:59 +0000 (0:00:00.227) 0:00:40.539 ******** 2026-04-04 00:25:23.220761 | orchestrator | ok: [testbed-manager] 2026-04-04 00:25:23.220779 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:23.220799 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:23.220818 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:23.220836 | orchestrator | ok: 
[testbed-node-3] 2026-04-04 00:25:23.220854 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.220872 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.220892 | orchestrator | 2026-04-04 00:25:23.220912 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-04-04 00:25:23.220941 | orchestrator | Saturday 04 April 2026 00:25:00 +0000 (0:00:00.207) 0:00:40.746 ******** 2026-04-04 00:25:23.220964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:25:23.220986 | orchestrator | 2026-04-04 00:25:23.221005 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-04-04 00:25:23.221024 | orchestrator | Saturday 04 April 2026 00:25:00 +0000 (0:00:00.288) 0:00:41.035 ******** 2026-04-04 00:25:23.221043 | orchestrator | ok: [testbed-manager] 2026-04-04 00:25:23.221061 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:23.221078 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:23.221097 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:23.221141 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.221164 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:23.221183 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.221202 | orchestrator | 2026-04-04 00:25:23.221221 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-04-04 00:25:23.221239 | orchestrator | Saturday 04 April 2026 00:25:02 +0000 (0:00:01.737) 0:00:42.772 ******** 2026-04-04 00:25:23.221258 | orchestrator | changed: [testbed-manager] 2026-04-04 00:25:23.221276 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:25:23.221293 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:23.221312 | orchestrator | 
changed: [testbed-node-4] 2026-04-04 00:25:23.221331 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:25:23.221348 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:25:23.221363 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:25:23.221381 | orchestrator | 2026-04-04 00:25:23.221399 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-04-04 00:25:23.221418 | orchestrator | Saturday 04 April 2026 00:25:03 +0000 (0:00:01.107) 0:00:43.880 ******** 2026-04-04 00:25:23.221436 | orchestrator | ok: [testbed-manager] 2026-04-04 00:25:23.221455 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:25:23.221474 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:25:23.221491 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:25:23.221511 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:25:23.221530 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:25:23.221549 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:25:23.221595 | orchestrator | 2026-04-04 00:25:23.221609 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-04-04 00:25:23.221620 | orchestrator | Saturday 04 April 2026 00:25:04 +0000 (0:00:00.807) 0:00:44.687 ******** 2026-04-04 00:25:23.221645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:25:23.221683 | orchestrator | 2026-04-04 00:25:23.221697 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-04-04 00:25:23.221718 | orchestrator | Saturday 04 April 2026 00:25:04 +0000 (0:00:00.299) 0:00:44.986 ******** 2026-04-04 00:25:23.221736 | orchestrator | changed: [testbed-manager] 2026-04-04 00:25:23.221748 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:25:23.221759 | 
orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:23.221770 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:23.221780 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:23.221791 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:23.221802 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:23.221814 | orchestrator |
2026-04-04 00:25:23.221863 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-04-04 00:25:23.221885 | orchestrator | Saturday 04 April 2026 00:25:05 +0000 (0:00:01.010) 0:00:45.997 ********
2026-04-04 00:25:23.221904 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:25:23.221924 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:25:23.221942 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:25:23.221959 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:25:23.221977 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:25:23.221994 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:25:23.222010 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:25:23.222115 | orchestrator |
2026-04-04 00:25:23.222136 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-04-04 00:25:23.222154 | orchestrator | Saturday 04 April 2026 00:25:05 +0000 (0:00:00.233) 0:00:46.231 ********
2026-04-04 00:25:23.222166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:25:23.222178 | orchestrator |
2026-04-04 00:25:23.222188 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-04-04 00:25:23.222200 | orchestrator | Saturday 04 April 2026 00:25:05 +0000 (0:00:00.287) 0:00:46.518 ********
2026-04-04 00:25:23.222218 | orchestrator | ok: [testbed-manager]
2026-04-04 00:25:23.222238 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:25:23.222249 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:25:23.222260 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:25:23.222271 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:25:23.222281 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:25:23.222292 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:25:23.222302 | orchestrator |
2026-04-04 00:25:23.222313 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-04-04 00:25:23.222324 | orchestrator | Saturday 04 April 2026 00:25:07 +0000 (0:00:02.020) 0:00:48.538 ********
2026-04-04 00:25:23.222335 | orchestrator | changed: [testbed-manager]
2026-04-04 00:25:23.222346 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:23.222356 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:23.222367 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:25:23.222378 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:23.222388 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:23.222402 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:23.222420 | orchestrator |
2026-04-04 00:25:23.222438 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-04-04 00:25:23.222456 | orchestrator | Saturday 04 April 2026 00:25:09 +0000 (0:00:01.206) 0:00:49.745 ********
2026-04-04 00:25:23.222473 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:25:23.222491 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:25:23.222508 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:25:23.222526 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:25:23.222544 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:25:23.222562 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:25:23.222629 | orchestrator | changed: [testbed-manager]
2026-04-04 00:25:23.222649 | orchestrator |
2026-04-04 00:25:23.222667 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-04-04 00:25:23.222686 | orchestrator | Saturday 04 April 2026 00:25:20 +0000 (0:00:11.166) 0:01:00.911 ********
2026-04-04 00:25:23.222704 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:25:23.222719 | orchestrator | ok: [testbed-manager]
2026-04-04 00:25:23.222729 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:25:23.222740 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:25:23.222751 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:25:23.222769 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:25:23.222785 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:25:23.222810 | orchestrator |
2026-04-04 00:25:23.222829 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-04-04 00:25:23.222846 | orchestrator | Saturday 04 April 2026 00:25:21 +0000 (0:00:01.370) 0:01:02.282 ********
2026-04-04 00:25:23.222861 | orchestrator | ok: [testbed-manager]
2026-04-04 00:25:23.222877 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:25:23.222892 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:25:23.222908 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:25:23.222924 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:25:23.222940 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:25:23.222956 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:25:23.222971 | orchestrator |
2026-04-04 00:25:23.222987 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-04-04 00:25:23.223003 | orchestrator | Saturday 04 April 2026 00:25:22 +0000 (0:00:00.889) 0:01:03.172 ********
2026-04-04 00:25:23.223019 | orchestrator | ok: [testbed-manager]
2026-04-04 00:25:23.223035 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:25:23.223051 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:25:23.223068 | orchestrator | ok:
[testbed-node-2]
2026-04-04 00:25:23.223084 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:25:23.223100 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:25:23.223117 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:25:23.223133 | orchestrator |
2026-04-04 00:25:23.223150 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-04-04 00:25:23.223168 | orchestrator | Saturday 04 April 2026 00:25:22 +0000 (0:00:00.194) 0:01:03.366 ********
2026-04-04 00:25:23.223184 | orchestrator | ok: [testbed-manager]
2026-04-04 00:25:23.223200 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:25:23.223217 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:25:23.223243 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:25:23.223261 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:25:23.223277 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:25:23.223294 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:25:23.223310 | orchestrator |
2026-04-04 00:25:23.223326 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-04-04 00:25:23.223344 | orchestrator | Saturday 04 April 2026 00:25:22 +0000 (0:00:00.211) 0:01:03.577 ********
2026-04-04 00:25:23.223361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:25:23.223379 | orchestrator |
2026-04-04 00:25:23.223414 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-04-04 00:27:57.881461 | orchestrator | Saturday 04 April 2026 00:25:23 +0000 (0:00:00.266) 0:01:03.843 ********
2026-04-04 00:27:57.881584 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.881650 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.881664 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.881675 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.881686 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.881697 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.881708 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.881719 | orchestrator |
2026-04-04 00:27:57.881732 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-04-04 00:27:57.881766 | orchestrator | Saturday 04 April 2026 00:25:24 +0000 (0:00:01.618) 0:01:05.462 ********
2026-04-04 00:27:57.881778 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:57.881791 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:57.881801 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:57.881812 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:57.881823 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:57.881833 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:57.881844 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:57.881855 | orchestrator |
2026-04-04 00:27:57.881867 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-04-04 00:27:57.881878 | orchestrator | Saturday 04 April 2026 00:25:25 +0000 (0:00:00.723) 0:01:06.186 ********
2026-04-04 00:27:57.881889 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.881900 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.881911 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.881922 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.881932 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.881943 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.881953 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.881964 | orchestrator |
2026-04-04 00:27:57.881975 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-04-04 00:27:57.881986 | orchestrator | Saturday 04 April 2026 00:25:25 +0000 (0:00:00.283) 0:01:06.470 ********
2026-04-04 00:27:57.881999 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.882012 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.882093 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.882138 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.882152 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.882164 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.882177 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.882187 | orchestrator |
2026-04-04 00:27:57.882198 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-04-04 00:27:57.882209 | orchestrator | Saturday 04 April 2026 00:25:27 +0000 (0:00:01.184) 0:01:07.655 ********
2026-04-04 00:27:57.882220 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:57.882231 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:57.882242 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:57.882252 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:57.882263 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:57.882274 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:57.882284 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:57.882295 | orchestrator |
2026-04-04 00:27:57.882306 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-04-04 00:27:57.882317 | orchestrator | Saturday 04 April 2026 00:25:28 +0000 (0:00:01.693) 0:01:09.348 ********
2026-04-04 00:27:57.882328 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.882339 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.882350 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.882360 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.882371 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.882382 | orchestrator | ok:
[testbed-node-2]
2026-04-04 00:27:57.882392 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.882403 | orchestrator |
2026-04-04 00:27:57.882414 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-04-04 00:27:57.882425 | orchestrator | Saturday 04 April 2026 00:25:31 +0000 (0:00:02.337) 0:01:11.686 ********
2026-04-04 00:27:57.882436 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.882447 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.882458 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.882468 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.882479 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.882499 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.882519 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.882553 | orchestrator |
2026-04-04 00:27:57.882573 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-04-04 00:27:57.882592 | orchestrator | Saturday 04 April 2026 00:26:27 +0000 (0:00:56.023) 0:02:07.709 ********
2026-04-04 00:27:57.882701 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:57.882714 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:27:57.882725 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:27:57.882736 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:27:57.882746 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:27:57.882757 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:27:57.882768 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:27:57.882786 | orchestrator |
2026-04-04 00:27:57.882803 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-04-04 00:27:57.882821 | orchestrator | Saturday 04 April 2026 00:27:43 +0000 (0:01:16.381) 0:03:24.091 ********
2026-04-04 00:27:57.882839 | orchestrator | ok: [testbed-manager]
2026-04-04 00:27:57.882858 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.882877 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.882895 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.882909 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.882920 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.882931 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.882941 | orchestrator |
2026-04-04 00:27:57.882953 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-04-04 00:27:57.882964 | orchestrator | Saturday 04 April 2026 00:27:45 +0000 (0:00:02.006) 0:03:26.098 ********
2026-04-04 00:27:57.882975 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:27:57.882986 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:27:57.882997 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:27:57.883007 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:27:57.883018 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:27:57.883029 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:27:57.883040 | orchestrator | changed: [testbed-manager]
2026-04-04 00:27:57.883050 | orchestrator |
2026-04-04 00:27:57.883061 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-04-04 00:27:57.883072 | orchestrator | Saturday 04 April 2026 00:27:56 +0000 (0:00:11.118) 0:03:37.217 ********
2026-04-04 00:27:57.883123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-04-04 00:27:57.883146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-04-04 00:27:57.883161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-04-04 00:27:57.883225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-04 00:27:57.883249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-04-04 00:27:57.883264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-04-04 00:27:57.883276 | orchestrator |
2026-04-04 00:27:57.883287 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-04-04 00:27:57.883298 | orchestrator | Saturday 04 April 2026 00:27:57 +0000 (0:00:00.451) 0:03:37.668 ********
2026-04-04 00:27:57.883309 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883319 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:27:57.883331 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883341 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883352 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:27:57.883363 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:27:57.883373 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883384 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:27:57.883395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-04-04 00:27:57.883438 | orchestrator |
2026-04-04 00:27:57.883449 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-04-04 00:27:57.883465 | orchestrator | Saturday 04 April 2026 00:27:57 +0000 (0:00:00.747) 0:03:38.415 ********
2026-04-04 00:27:57.883476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:27:57.883489 | orchestrator | skipping: [testbed-manager] => (item={'name':
'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:27:57.883500 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:27:57.883511 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:27:57.883522 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:27:57.883541 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.797172 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.797276 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.797292 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.797304 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.797317 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:03.797330 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797341 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.797352 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.797388 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.797400 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.797411 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.797422 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.797445 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.797455 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.797466 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.797477 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.797488 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.797498 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.797509 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.797520 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:28:03.797531 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.797541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.797553 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.797563 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.797574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.797585 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:28:03.797596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797691 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.797703 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.797715 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.797728 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.797744 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.797763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.797782 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.797800 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.797837 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.797858 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:28:03.797878 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-04-04 00:28:03.797937 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.797950 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.797990 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-04-04 00:28:03.798014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.798102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.798120 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-04-04 00:28:03.798139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.798158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.798177 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-04-04 00:28:03.798196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.798213 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.798224 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-04-04 00:28:03.798235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.798245 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.798256 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-04-04 00:28:03.798273 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.798292 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.798316 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-04-04 00:28:03.798342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.798359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.798377 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-04-04 00:28:03.798396 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.798415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.798434 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-04-04 00:28:03.798448 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.798459 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.798470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-04-04 00:28:03.798481 | orchestrator |
2026-04-04 00:28:03.798493 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-04-04 00:28:03.798504 | orchestrator | Saturday 04 April 2026 00:28:01 +0000 (0:00:03.858) 0:03:42.274 ********
2026-04-04 00:28:03.798515 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798536 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798547 | orchestrator | changed:
[testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798569 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798580 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798590 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-04-04 00:28:03.798631 | orchestrator |
2026-04-04 00:28:03.798644 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-04-04 00:28:03.798655 | orchestrator | Saturday 04 April 2026 00:28:03 +0000 (0:00:01.579) 0:03:43.853 ********
2026-04-04 00:28:03.798666 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798677 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:03.798696 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798707 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798718 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:28:03.798734 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798751 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:28:03.798769 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:28:03.798788 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798807 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:03.798838 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.365865 | orchestrator |
2026-04-04 00:28:17.365973 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-04-04 00:28:17.365991 | orchestrator | Saturday 04 April 2026 00:28:03 +0000 (0:00:00.603) 0:03:44.456 ********
2026-04-04 00:28:17.366004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366077 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:17.366094 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366106 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366117 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:28:17.366128 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:28:17.366139 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366150 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:28:17.366161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366172 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-04-04 00:28:17.366194 | orchestrator |
2026-04-04 00:28:17.366205 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-04-04 00:28:17.366217 | orchestrator | Saturday 04 April 2026 00:28:04 +0000 (0:00:00.517) 0:03:44.974 ********
2026-04-04 00:28:17.366227 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366249 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:17.366260 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:28:17.366271 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366307 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366318 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:28:17.366329 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:28:17.366340 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366361 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-04-04 00:28:17.366371 | orchestrator |
2026-04-04 00:28:17.366382 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-04-04 00:28:17.366397 | orchestrator | Saturday 04 April 2026 00:28:05 +0000 (0:00:01.651) 0:03:46.626 ********
2026-04-04 00:28:17.366409 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:17.366423 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:28:17.366435 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:28:17.366449 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:28:17.366461 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:28:17.366473 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:28:17.366485 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:28:17.366497 | orchestrator |
2026-04-04 00:28:17.366510 | orchestrator | TASK
[osism.commons.services : Populate service facts] *************************
2026-04-04 00:28:17.366524 | orchestrator | Saturday 04 April 2026 00:28:06 +0000 (0:00:00.332) 0:03:46.959 ********
2026-04-04 00:28:17.366536 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:28:17.366550 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:28:17.366562 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:28:17.366574 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:28:17.366587 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:28:17.366599 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:28:17.366632 | orchestrator | ok: [testbed-manager]
2026-04-04 00:28:17.366646 | orchestrator |
2026-04-04 00:28:17.366656 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-04-04 00:28:17.366667 | orchestrator | Saturday 04 April 2026 00:28:11 +0000 (0:00:05.449) 0:03:52.408 ********
2026-04-04 00:28:17.366678 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-04-04 00:28:17.366689 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:28:17.366700 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-04-04 00:28:17.366711 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:28:17.366722 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-04-04 00:28:17.366732 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:28:17.366743 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-04-04 00:28:17.366754 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-04-04 00:28:17.366765 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:28:17.366775 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-04-04 00:28:17.366786 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:28:17.366797 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:28:17.366807 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-04-04 00:28:17.366818 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:28:17.366829 | orchestrator |
2026-04-04 00:28:17.366839 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-04-04 00:28:17.366850 | orchestrator | Saturday 04 April 2026 00:28:12 +0000 (0:00:00.330) 0:03:52.739 ********
2026-04-04 00:28:17.366861 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-04-04 00:28:17.366872 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-04-04 00:28:17.366883 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-04-04 00:28:17.366912 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-04-04 00:28:17.366924 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-04-04 00:28:17.366960 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-04-04 00:28:17.366993 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-04-04 00:28:17.367004 | orchestrator |
2026-04-04 00:28:17.367015 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-04-04 00:28:17.367026 | orchestrator | Saturday 04 April 2026 00:28:13 +0000 (0:00:01.098) 0:03:53.837 ********
2026-04-04 00:28:17.367040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:28:17.367053 | orchestrator |
2026-04-04 00:28:17.367064 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-04-04 00:28:17.367075 | orchestrator | Saturday 04 April 2026 00:28:13 +0000 (0:00:00.466) 0:03:54.304 ********
2026-04-04 00:28:17.367086 | orchestrator | ok: [testbed-manager]
2026-04-04 00:28:17.367097 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:28:17.367108 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:28:17.367118 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:28:17.367129 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:28:17.367140 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:28:17.367150 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:28:17.367161 | orchestrator |
2026-04-04 00:28:17.367172 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-04-04 00:28:17.367183 | orchestrator | Saturday 04 April 2026 00:28:15 +0000 (0:00:01.331) 0:03:55.636 ********
2026-04-04 00:28:17.367193 | orchestrator | ok: [testbed-manager]
2026-04-04 00:28:17.367204 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:28:17.367215 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:28:17.367225 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:28:17.367236 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:28:17.367246 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:28:17.367257 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:28:17.367268 | orchestrator |
2026-04-04 00:28:17.367278 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-04-04 00:28:17.367289 | orchestrator | Saturday 04 April 2026 00:28:15 +0000 (0:00:00.642) 0:03:56.279 ********
2026-04-04 00:28:17.367300 | orchestrator | changed: [testbed-manager]
2026-04-04 00:28:17.367327 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:28:17.367338 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:28:17.367349 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:28:17.367360 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:28:17.367371 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:28:17.367381 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:28:17.367392 | orchestrator |
2026-04-04 00:28:17.367403 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-04-04 00:28:17.367413 | orchestrator | Saturday 04 April 2026 00:28:16 +0000 (0:00:00.614)
0:03:56.893 ******** 2026-04-04 00:28:17.367424 | orchestrator | ok: [testbed-manager] 2026-04-04 00:28:17.367435 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:28:17.367446 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:28:17.367456 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:28:17.367467 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:28:17.367478 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:28:17.367488 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:28:17.367499 | orchestrator | 2026-04-04 00:28:17.367510 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-04 00:28:17.367521 | orchestrator | Saturday 04 April 2026 00:28:16 +0000 (0:00:00.568) 0:03:57.461 ******** 2026-04-04 00:28:17.367535 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261100.886609, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:17.367562 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261125.0455534, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:17.367575 | orchestrator | changed: 
[testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261126.5175557, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:17.367670 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261124.741563, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599774 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261132.400207, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599892 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1775261128.1121366, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599909 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1775261115.648408, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599922 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599960 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599987 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.599999 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.600030 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.600042 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.600054 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 00:28:22.600066 | orchestrator | 2026-04-04 00:28:22.600079 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-04 00:28:22.600092 | orchestrator | Saturday 04 April 2026 00:28:17 +0000 (0:00:01.007) 0:03:58.469 ******** 2026-04-04 00:28:22.600103 | orchestrator | changed: [testbed-manager] 2026-04-04 00:28:22.600116 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:28:22.600127 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:28:22.600145 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:28:22.600156 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:28:22.600167 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:28:22.600178 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:28:22.600188 | orchestrator | 2026-04-04 00:28:22.600200 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-04-04 00:28:22.600211 | orchestrator | Saturday 04 April 2026 00:28:18 +0000 (0:00:01.088) 0:03:59.557 ******** 2026-04-04 00:28:22.600222 | orchestrator | changed: [testbed-manager] 2026-04-04 00:28:22.600233 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:28:22.600243 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:28:22.600256 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:28:22.600268 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:28:22.600280 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:28:22.600292 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:28:22.600304 | orchestrator | 2026-04-04 00:28:22.600317 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-04 00:28:22.600329 | orchestrator | Saturday 04 April 2026 00:28:20 +0000 (0:00:01.123) 0:04:00.681 ******** 2026-04-04 00:28:22.600342 | orchestrator | changed: [testbed-manager] 2026-04-04 00:28:22.600354 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:28:22.600367 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:28:22.600379 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:28:22.600390 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:28:22.600402 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:28:22.600414 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:28:22.600426 | orchestrator | 2026-04-04 00:28:22.600439 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-04 00:28:22.600458 | orchestrator | Saturday 04 April 2026 00:28:21 +0000 (0:00:01.109) 0:04:01.790 ******** 2026-04-04 00:28:22.600471 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:28:22.600484 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:28:22.600496 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:28:22.600507 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 00:28:22.600520 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:28:22.600532 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:28:22.600545 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:28:22.600557 | orchestrator |
2026-04-04 00:28:22.600570 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-04-04 00:28:22.600583 | orchestrator | Saturday 04 April 2026 00:28:21 +0000 (0:00:00.332) 0:04:02.122 ********
2026-04-04 00:28:22.600596 | orchestrator | ok: [testbed-manager]
2026-04-04 00:28:22.600635 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:28:22.600646 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:28:22.600657 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:28:22.600667 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:28:22.600678 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:28:22.600688 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:28:22.600699 | orchestrator |
2026-04-04 00:28:22.600710 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-04-04 00:28:22.600721 | orchestrator | Saturday 04 April 2026 00:28:22 +0000 (0:00:00.717) 0:04:02.840 ********
2026-04-04 00:28:22.600734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:28:22.600746 | orchestrator |
2026-04-04 00:28:22.600757 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-04-04 00:28:22.600775 | orchestrator | Saturday 04 April 2026 00:28:22 +0000 (0:00:00.383) 0:04:03.223 ********
2026-04-04 00:29:36.754735 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.754887 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:36.754920 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:36.754968 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:36.754980 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:36.754991 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:36.755002 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:36.755014 | orchestrator |
2026-04-04 00:29:36.755028 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-04-04 00:29:36.755041 | orchestrator | Saturday 04 April 2026 00:28:31 +0000 (0:00:08.464) 0:04:11.688 ********
2026-04-04 00:29:36.755051 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.755063 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.755074 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755084 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755095 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.755106 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.755117 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.755127 | orchestrator |
2026-04-04 00:29:36.755138 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-04-04 00:29:36.755149 | orchestrator | Saturday 04 April 2026 00:28:32 +0000 (0:00:01.458) 0:04:13.147 ********
2026-04-04 00:29:36.755160 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.755173 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755185 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.755204 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.755223 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755241 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.755259 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.755276 | orchestrator |
2026-04-04 00:29:36.755296 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-04-04 00:29:36.755313 | orchestrator | Saturday 04 April 2026 00:28:33 +0000 (0:00:00.970) 0:04:14.118 ********
2026-04-04 00:29:36.755332 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.755350 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.755368 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755460 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755483 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.755506 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.755527 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.755550 | orchestrator |
2026-04-04 00:29:36.755566 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-04-04 00:29:36.755584 | orchestrator | Saturday 04 April 2026 00:28:33 +0000 (0:00:00.266) 0:04:14.384 ********
2026-04-04 00:29:36.755600 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.755652 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.755672 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755688 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755704 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.755720 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.755736 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.755751 | orchestrator |
2026-04-04 00:29:36.755767 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-04-04 00:29:36.755783 | orchestrator | Saturday 04 April 2026 00:28:34 +0000 (0:00:00.277) 0:04:14.662 ********
2026-04-04 00:29:36.755799 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.755815 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.755831 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755847 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755863 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.755879 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.755894 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.755910 | orchestrator |
2026-04-04 00:29:36.755927 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-04-04 00:29:36.755943 | orchestrator | Saturday 04 April 2026 00:28:34 +0000 (0:00:00.275) 0:04:14.938 ********
2026-04-04 00:29:36.755959 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.755975 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.755990 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.756023 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.756039 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.756059 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.756078 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.756097 | orchestrator |
2026-04-04 00:29:36.756115 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-04 00:29:36.756135 | orchestrator | Saturday 04 April 2026 00:28:38 +0000 (0:00:04.587) 0:04:19.525 ********
2026-04-04 00:29:36.756157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:29:36.756179 | orchestrator |
2026-04-04 00:29:36.756198 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-04 00:29:36.756217 | orchestrator | Saturday 04 April 2026 00:28:39 +0000 (0:00:00.371) 0:04:19.897 ********
2026-04-04 00:29:36.756236 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756254 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-04 00:29:36.756272 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756290 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:36.756309 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-04 00:29:36.756326 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756342 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-04 00:29:36.756359 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:36.756376 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756392 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-04 00:29:36.756410 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:36.756427 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:36.756444 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756463 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-04 00:29:36.756481 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756497 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:36.756544 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-04 00:29:36.756565 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:36.756582 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-04 00:29:36.756601 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-04 00:29:36.756643 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:36.756661 | orchestrator |
2026-04-04 00:29:36.756681 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-04 00:29:36.756701 | orchestrator | Saturday 04 April 2026 00:28:39 +0000 (0:00:00.330) 0:04:20.227 ********
2026-04-04 00:29:36.756722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:29:36.756742 | orchestrator |
2026-04-04 00:29:36.756761 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-04 00:29:36.756781 | orchestrator | Saturday 04 April 2026 00:28:40 +0000 (0:00:00.462) 0:04:20.689 ********
2026-04-04 00:29:36.756800 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-04 00:29:36.756819 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-04 00:29:36.756836 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:36.756854 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-04 00:29:36.756870 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:36.756887 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:36.756918 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-04 00:29:36.756936 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-04 00:29:36.756954 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:36.756972 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:36.757014 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-04 00:29:36.757031 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:36.757049 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-04 00:29:36.757066 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:36.757083 | orchestrator |
2026-04-04 00:29:36.757100 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-04 00:29:36.757118 | orchestrator | Saturday 04 April 2026 00:28:40 +0000 (0:00:00.317) 0:04:21.007 ********
2026-04-04 00:29:36.757136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:29:36.757154 | orchestrator |
2026-04-04 00:29:36.757172 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-04 00:29:36.757188 | orchestrator | Saturday 04 April 2026 00:28:40 +0000 (0:00:00.400) 0:04:21.407 ********
2026-04-04 00:29:36.757205 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:36.757224 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:36.757244 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:36.757264 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:36.757283 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:36.757302 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:36.757321 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:36.757342 | orchestrator |
2026-04-04 00:29:36.757357 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-04 00:29:36.757374 | orchestrator | Saturday 04 April 2026 00:29:14 +0000 (0:00:33.354) 0:04:54.762 ********
2026-04-04 00:29:36.757392 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:36.757411 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:36.757429 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:36.757447 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:36.757466 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:36.757484 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:36.757511 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:36.757531 | orchestrator |
2026-04-04 00:29:36.757550 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-04 00:29:36.757569 | orchestrator | Saturday 04 April 2026 00:29:22 +0000 (0:00:07.959) 0:05:02.722 ********
2026-04-04 00:29:36.757587 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:36.757607 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:36.757682 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:36.757694 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:36.757705 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:36.757716 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:36.757726 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:36.757737 | orchestrator |
2026-04-04 00:29:36.757748 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-04 00:29:36.757759 | orchestrator | Saturday 04 April 2026 00:29:29 +0000 (0:00:07.011) 0:05:09.733 ********
2026-04-04 00:29:36.757770 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:36.757781 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:36.757792 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:36.757803 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:36.757814 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:36.757824 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:36.757835 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:36.757845 | orchestrator |
2026-04-04 00:29:36.757855 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-04 00:29:36.757877 | orchestrator | Saturday 04 April 2026 00:29:30 +0000 (0:00:01.608) 0:05:11.342 ********
2026-04-04 00:29:36.757886 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:36.757896 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:36.757905 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:36.757915 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:36.757925 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:36.757934 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:36.757944 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:36.757953 | orchestrator |
2026-04-04 00:29:36.757977 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-04 00:29:47.863249 | orchestrator | Saturday 04 April 2026 00:29:36 +0000 (0:00:06.033) 0:05:17.376 ********
2026-04-04 00:29:47.863384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:29:47.863423 | orchestrator |
2026-04-04 00:29:47.863448 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-04 00:29:47.863467 | orchestrator | Saturday 04 April 2026 00:29:37 +0000 (0:00:00.402) 0:05:17.778 ********
2026-04-04 00:29:47.863485 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:47.863506 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:47.863521 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:47.863536 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:47.863552 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:47.863569 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:47.863585 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:47.863602 | orchestrator |
2026-04-04 00:29:47.863667 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-04 00:29:47.863687 | orchestrator | Saturday 04 April 2026 00:29:37 +0000 (0:00:00.719) 0:05:18.497 ********
2026-04-04 00:29:47.863707 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:47.863725 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:47.863744 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:47.863763 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:47.863781 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:47.863802 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:47.863817 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:47.863830 | orchestrator |
2026-04-04 00:29:47.863843 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-04 00:29:47.863857 | orchestrator | Saturday 04 April 2026 00:29:39 +0000 (0:00:01.914) 0:05:20.412 ********
2026-04-04 00:29:47.863869 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:29:47.863883 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:29:47.863895 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:29:47.863906 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:29:47.863917 | orchestrator | changed: [testbed-manager]
2026-04-04 00:29:47.863928 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:29:47.863939 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:29:47.863949 | orchestrator |
2026-04-04 00:29:47.863960 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-04 00:29:47.863972 | orchestrator | Saturday 04 April 2026 00:29:40 +0000 (0:00:00.806) 0:05:21.218 ********
2026-04-04 00:29:47.863982 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:47.863993 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:47.864004 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:47.864015 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:47.864026 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:47.864037 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:47.864055 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:47.864073 | orchestrator |
2026-04-04 00:29:47.864089 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-04 00:29:47.864188 | orchestrator | Saturday 04 April 2026 00:29:40 +0000 (0:00:00.347) 0:05:21.566 ********
2026-04-04 00:29:47.864202 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:47.864213 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:47.864224 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:47.864234 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:47.864245 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:47.864256 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:29:47.864266 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:29:47.864277 | orchestrator |
2026-04-04 00:29:47.864287 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-04 00:29:47.864298 | orchestrator | Saturday 04 April 2026 00:29:41 +0000 (0:00:00.414) 0:05:21.980 ********
2026-04-04 00:29:47.864309 | orchestrator | ok: [testbed-manager]
2026-04-04 00:29:47.864320 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:29:47.864330 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:29:47.864341 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:29:47.864351 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:29:47.864376 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:29:47.864386 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:29:47.864397 | orchestrator |
2026-04-04 00:29:47.864407 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-04 00:29:47.864418 | orchestrator | Saturday 04 April 2026 00:29:41 +0000 (0:00:00.458) 0:05:22.439 ********
2026-04-04 00:29:47.864429 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:29:47.864439 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:29:47.864450 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:29:47.864460 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:29:47.864471 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:29:47.864481 | orchestrator | skipping: [testbed-node-4] 2026-04-04
00:29:47.864492 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:29:47.864502 | orchestrator | 2026-04-04 00:29:47.864513 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-04-04 00:29:47.864525 | orchestrator | Saturday 04 April 2026 00:29:42 +0000 (0:00:00.317) 0:05:22.756 ******** 2026-04-04 00:29:47.864536 | orchestrator | ok: [testbed-manager] 2026-04-04 00:29:47.864546 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:29:47.864557 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:29:47.864567 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:29:47.864578 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:29:47.864588 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:29:47.864599 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:29:47.864609 | orchestrator | 2026-04-04 00:29:47.864677 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-04-04 00:29:47.864689 | orchestrator | Saturday 04 April 2026 00:29:42 +0000 (0:00:00.325) 0:05:23.081 ******** 2026-04-04 00:29:47.864700 | orchestrator | ok: [testbed-manager] =>  2026-04-04 00:29:47.864711 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864722 | orchestrator | ok: [testbed-node-0] =>  2026-04-04 00:29:47.864732 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864743 | orchestrator | ok: [testbed-node-1] =>  2026-04-04 00:29:47.864754 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864764 | orchestrator | ok: [testbed-node-2] =>  2026-04-04 00:29:47.864775 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864808 | orchestrator | ok: [testbed-node-3] =>  2026-04-04 00:29:47.864820 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864831 | orchestrator | ok: [testbed-node-4] =>  2026-04-04 00:29:47.864841 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864852 | orchestrator | ok: [testbed-node-5] =>  
2026-04-04 00:29:47.864862 | orchestrator |  docker_version: 5:27.5.1 2026-04-04 00:29:47.864873 | orchestrator | 2026-04-04 00:29:47.864884 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-04-04 00:29:47.864895 | orchestrator | Saturday 04 April 2026 00:29:42 +0000 (0:00:00.262) 0:05:23.343 ******** 2026-04-04 00:29:47.864915 | orchestrator | ok: [testbed-manager] =>  2026-04-04 00:29:47.864926 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.864937 | orchestrator | ok: [testbed-node-0] =>  2026-04-04 00:29:47.864947 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.864958 | orchestrator | ok: [testbed-node-1] =>  2026-04-04 00:29:47.864969 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.864979 | orchestrator | ok: [testbed-node-2] =>  2026-04-04 00:29:47.864990 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.865000 | orchestrator | ok: [testbed-node-3] =>  2026-04-04 00:29:47.865011 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.865021 | orchestrator | ok: [testbed-node-4] =>  2026-04-04 00:29:47.865032 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.865042 | orchestrator | ok: [testbed-node-5] =>  2026-04-04 00:29:47.865052 | orchestrator |  docker_cli_version: 5:27.5.1 2026-04-04 00:29:47.865063 | orchestrator | 2026-04-04 00:29:47.865074 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-04-04 00:29:47.865085 | orchestrator | Saturday 04 April 2026 00:29:42 +0000 (0:00:00.267) 0:05:23.610 ******** 2026-04-04 00:29:47.865095 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:29:47.865106 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:29:47.865116 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:29:47.865127 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:29:47.865137 | orchestrator | skipping: [testbed-node-3] 
2026-04-04 00:29:47.865148 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:29:47.865158 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:29:47.865169 | orchestrator | 2026-04-04 00:29:47.865180 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-04-04 00:29:47.865191 | orchestrator | Saturday 04 April 2026 00:29:43 +0000 (0:00:00.220) 0:05:23.831 ******** 2026-04-04 00:29:47.865202 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:29:47.865212 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:29:47.865223 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:29:47.865234 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:29:47.865244 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:29:47.865255 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:29:47.865265 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:29:47.865276 | orchestrator | 2026-04-04 00:29:47.865286 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-04-04 00:29:47.865297 | orchestrator | Saturday 04 April 2026 00:29:43 +0000 (0:00:00.234) 0:05:24.065 ******** 2026-04-04 00:29:47.865311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:29:47.865324 | orchestrator | 2026-04-04 00:29:47.865336 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-04-04 00:29:47.865346 | orchestrator | Saturday 04 April 2026 00:29:43 +0000 (0:00:00.359) 0:05:24.425 ******** 2026-04-04 00:29:47.865357 | orchestrator | ok: [testbed-manager] 2026-04-04 00:29:47.865368 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:29:47.865378 | orchestrator | ok: [testbed-node-1] 2026-04-04 
00:29:47.865389 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:29:47.865399 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:29:47.865410 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:29:47.865420 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:29:47.865431 | orchestrator | 2026-04-04 00:29:47.865441 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-04-04 00:29:47.865452 | orchestrator | Saturday 04 April 2026 00:29:44 +0000 (0:00:00.773) 0:05:25.198 ******** 2026-04-04 00:29:47.865469 | orchestrator | ok: [testbed-manager] 2026-04-04 00:29:47.865480 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:29:47.865491 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:29:47.865501 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:29:47.865518 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:29:47.865529 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:29:47.865539 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:29:47.865550 | orchestrator | 2026-04-04 00:29:47.865561 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-04-04 00:29:47.865573 | orchestrator | Saturday 04 April 2026 00:29:47 +0000 (0:00:02.986) 0:05:28.185 ******** 2026-04-04 00:29:47.865584 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-04-04 00:29:47.865595 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-04-04 00:29:47.865606 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-04-04 00:29:47.865632 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-04-04 00:29:47.865644 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-04-04 00:29:47.865654 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-04-04 00:29:47.865665 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:29:47.865676 | orchestrator | skipping: 
[testbed-node-1] => (item=containerd)  2026-04-04 00:29:47.865687 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-04-04 00:29:47.865697 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-04-04 00:29:47.865708 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:29:47.865719 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-04-04 00:29:47.865729 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-04-04 00:29:47.865740 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-04-04 00:29:47.865751 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:29:47.865761 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-04-04 00:29:47.865779 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-04-04 00:30:50.983324 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:30:50.983460 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-04-04 00:30:50.983471 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-04-04 00:30:50.983479 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-04-04 00:30:50.983486 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-04-04 00:30:50.983492 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:30:50.983499 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:30:50.983506 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-04-04 00:30:50.983512 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-04-04 00:30:50.983520 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-04-04 00:30:50.983526 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:50.983533 | orchestrator | 2026-04-04 00:30:50.983541 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-04-04 00:30:50.983549 | orchestrator | Saturday 
04 April 2026 00:29:48 +0000 (0:00:00.495) 0:05:28.680 ******** 2026-04-04 00:30:50.983556 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.983562 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.983569 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.983576 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.983657 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.983666 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.983671 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.983677 | orchestrator | 2026-04-04 00:30:50.983684 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-04-04 00:30:50.983691 | orchestrator | Saturday 04 April 2026 00:29:54 +0000 (0:00:06.835) 0:05:35.516 ******** 2026-04-04 00:30:50.983697 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.983703 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.983709 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.983715 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.983721 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.983749 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.983756 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.983761 | orchestrator | 2026-04-04 00:30:50.983767 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-04-04 00:30:50.983773 | orchestrator | Saturday 04 April 2026 00:29:55 +0000 (0:00:01.056) 0:05:36.573 ******** 2026-04-04 00:30:50.983779 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.983785 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.983790 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.983796 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.983802 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.983807 | orchestrator | 
changed: [testbed-node-1] 2026-04-04 00:30:50.983813 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.983818 | orchestrator | 2026-04-04 00:30:50.983824 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-04-04 00:30:50.983832 | orchestrator | Saturday 04 April 2026 00:30:03 +0000 (0:00:08.013) 0:05:44.586 ******** 2026-04-04 00:30:50.983842 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:50.983852 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.983861 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.983870 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.983879 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.983889 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.983899 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.983908 | orchestrator | 2026-04-04 00:30:50.983918 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-04-04 00:30:50.983927 | orchestrator | Saturday 04 April 2026 00:30:07 +0000 (0:00:03.557) 0:05:48.143 ******** 2026-04-04 00:30:50.983937 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.983946 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.983955 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.983965 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.983974 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.983983 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.983992 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984001 | orchestrator | 2026-04-04 00:30:50.984025 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-04-04 00:30:50.984035 | orchestrator | Saturday 04 April 2026 00:30:08 +0000 (0:00:01.392) 0:05:49.536 ******** 2026-04-04 00:30:50.984044 | orchestrator | ok: [testbed-manager] 
2026-04-04 00:30:50.984054 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984062 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984072 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984080 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.984090 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984098 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984107 | orchestrator | 2026-04-04 00:30:50.984117 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-04-04 00:30:50.984127 | orchestrator | Saturday 04 April 2026 00:30:10 +0000 (0:00:01.333) 0:05:50.870 ******** 2026-04-04 00:30:50.984135 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:30:50.984145 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:30:50.984153 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:30:50.984163 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:30:50.984172 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:30:50.984182 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:50.984190 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:50.984198 | orchestrator | 2026-04-04 00:30:50.984204 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-04-04 00:30:50.984210 | orchestrator | Saturday 04 April 2026 00:30:10 +0000 (0:00:00.590) 0:05:51.460 ******** 2026-04-04 00:30:50.984215 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.984222 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984227 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984239 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984245 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984251 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984256 | orchestrator | changed: [testbed-node-3] 2026-04-04 
00:30:50.984262 | orchestrator | 2026-04-04 00:30:50.984269 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-04-04 00:30:50.984293 | orchestrator | Saturday 04 April 2026 00:30:21 +0000 (0:00:10.614) 0:06:02.075 ******** 2026-04-04 00:30:50.984300 | orchestrator | changed: [testbed-manager] 2026-04-04 00:30:50.984305 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984311 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984317 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984322 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.984328 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984334 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984340 | orchestrator | 2026-04-04 00:30:50.984345 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-04-04 00:30:50.984351 | orchestrator | Saturday 04 April 2026 00:30:22 +0000 (0:00:01.171) 0:06:03.246 ******** 2026-04-04 00:30:50.984357 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.984363 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984368 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984374 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984380 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984385 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.984391 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984397 | orchestrator | 2026-04-04 00:30:50.984403 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-04-04 00:30:50.984408 | orchestrator | Saturday 04 April 2026 00:30:32 +0000 (0:00:10.252) 0:06:13.499 ******** 2026-04-04 00:30:50.984414 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.984420 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984426 | 
orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984431 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984436 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984443 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984448 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.984455 | orchestrator | 2026-04-04 00:30:50.984460 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-04-04 00:30:50.984466 | orchestrator | Saturday 04 April 2026 00:30:44 +0000 (0:00:11.155) 0:06:24.654 ******** 2026-04-04 00:30:50.984472 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-04-04 00:30:50.984478 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-04-04 00:30:50.984484 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-04-04 00:30:50.984489 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-04-04 00:30:50.984495 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-04-04 00:30:50.984501 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-04-04 00:30:50.984507 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-04-04 00:30:50.984513 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-04-04 00:30:50.984519 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-04-04 00:30:50.984524 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-04-04 00:30:50.984530 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-04-04 00:30:50.984536 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-04-04 00:30:50.984542 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-04-04 00:30:50.984548 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-04-04 00:30:50.984554 | orchestrator | 2026-04-04 00:30:50.984559 | orchestrator | TASK [osism.services.docker : Install python3 
docker package] ****************** 2026-04-04 00:30:50.984565 | orchestrator | Saturday 04 April 2026 00:30:45 +0000 (0:00:01.303) 0:06:25.958 ******** 2026-04-04 00:30:50.984610 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:30:50.984618 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:30:50.984624 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:30:50.984630 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:30:50.984637 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:30:50.984643 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:30:50.984650 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:50.984656 | orchestrator | 2026-04-04 00:30:50.984662 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-04-04 00:30:50.984668 | orchestrator | Saturday 04 April 2026 00:30:45 +0000 (0:00:00.643) 0:06:26.601 ******** 2026-04-04 00:30:50.984675 | orchestrator | ok: [testbed-manager] 2026-04-04 00:30:50.984682 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:30:50.984688 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:30:50.984694 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:30:50.984701 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:30:50.984707 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:30:50.984714 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:30:50.984720 | orchestrator | 2026-04-04 00:30:50.984727 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-04-04 00:30:50.984735 | orchestrator | Saturday 04 April 2026 00:30:50 +0000 (0:00:04.193) 0:06:30.794 ******** 2026-04-04 00:30:50.984742 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:30:50.984749 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:30:50.984755 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:30:50.984761 | orchestrator | skipping: 
[testbed-node-2] 2026-04-04 00:30:50.984767 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:30:50.984773 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:30:50.984780 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:30:50.984786 | orchestrator | 2026-04-04 00:30:50.984794 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-04-04 00:30:50.984800 | orchestrator | Saturday 04 April 2026 00:30:50 +0000 (0:00:00.550) 0:06:31.345 ******** 2026-04-04 00:30:50.984807 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-04-04 00:30:50.984814 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-04-04 00:30:50.984820 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:30:50.984863 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-04-04 00:30:50.984869 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-04-04 00:30:50.984875 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:30:50.984882 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-04-04 00:30:50.984888 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-04-04 00:30:50.984895 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:30:50.984910 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-04-04 00:31:11.122834 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-04-04 00:31:11.122948 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:31:11.122964 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-04-04 00:31:11.122976 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-04-04 00:31:11.122987 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:31:11.122999 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-04-04 00:31:11.123010 | 
orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-04-04 00:31:11.123021 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:31:11.123032 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-04-04 00:31:11.123042 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-04-04 00:31:11.123053 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:31:11.123064 | orchestrator | 2026-04-04 00:31:11.123077 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-04-04 00:31:11.123115 | orchestrator | Saturday 04 April 2026 00:30:51 +0000 (0:00:00.546) 0:06:31.892 ******** 2026-04-04 00:31:11.123127 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:31:11.123137 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:31:11.123155 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:31:11.123175 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:31:11.123204 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:31:11.123224 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:31:11.123243 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:31:11.123262 | orchestrator | 2026-04-04 00:31:11.123283 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-04-04 00:31:11.123303 | orchestrator | Saturday 04 April 2026 00:30:51 +0000 (0:00:00.509) 0:06:32.401 ******** 2026-04-04 00:31:11.123323 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:31:11.123344 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:31:11.123365 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:31:11.123386 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:31:11.123406 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:31:11.123427 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:31:11.123449 | orchestrator | skipping: [testbed-node-5] 
2026-04-04 00:31:11.123471 | orchestrator |
2026-04-04 00:31:11.123495 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-04 00:31:11.123517 | orchestrator | Saturday 04 April 2026 00:30:52 +0000 (0:00:00.610) 0:06:33.012 ********
2026-04-04 00:31:11.123536 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:11.123550 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:11.123603 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:11.123616 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:11.123629 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:11.123641 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:11.123652 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:11.123662 | orchestrator |
2026-04-04 00:31:11.123673 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-04-04 00:31:11.123684 | orchestrator | Saturday 04 April 2026 00:30:52 +0000 (0:00:00.531) 0:06:33.544 ********
2026-04-04 00:31:11.123695 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.123706 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.123717 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.123727 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.123738 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.123748 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.123759 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.123769 | orchestrator |
2026-04-04 00:31:11.123780 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-04-04 00:31:11.123791 | orchestrator | Saturday 04 April 2026 00:30:54 +0000 (0:00:01.873) 0:06:35.417 ********
2026-04-04 00:31:11.123802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:11.123815 | orchestrator |
2026-04-04 00:31:11.123843 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-04-04 00:31:11.123855 | orchestrator | Saturday 04 April 2026 00:30:55 +0000 (0:00:00.908) 0:06:36.325 ********
2026-04-04 00:31:11.123865 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.123876 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:11.123887 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:11.123897 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:11.123908 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:11.123919 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:11.123930 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:11.123940 | orchestrator |
2026-04-04 00:31:11.123951 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-04-04 00:31:11.123973 | orchestrator | Saturday 04 April 2026 00:30:56 +0000 (0:00:01.086) 0:06:37.412 ********
2026-04-04 00:31:11.123984 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.123995 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:11.124006 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:11.124016 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:11.124027 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:11.124037 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:11.124048 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:11.124058 | orchestrator |
2026-04-04 00:31:11.124069 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-04-04 00:31:11.124080 | orchestrator | Saturday 04 April 2026 00:30:57 +0000 (0:00:00.903) 0:06:38.315 ********
2026-04-04 00:31:11.124091 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.124101 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:11.124112 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:11.124122 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:11.124133 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:11.124143 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:11.124154 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:11.124164 | orchestrator |
2026-04-04 00:31:11.124175 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-04-04 00:31:11.124205 | orchestrator | Saturday 04 April 2026 00:30:59 +0000 (0:00:01.478) 0:06:39.794 ********
2026-04-04 00:31:11.124217 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:11.124228 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.124238 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.124249 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.124259 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.124270 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.124281 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.124291 | orchestrator |
2026-04-04 00:31:11.124302 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-04-04 00:31:11.124313 | orchestrator | Saturday 04 April 2026 00:31:00 +0000 (0:00:01.492) 0:06:41.287 ********
2026-04-04 00:31:11.124324 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.124334 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:11.124345 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:11.124355 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:11.124366 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:11.124377 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:11.124387 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:11.124398 | orchestrator |
2026-04-04 00:31:11.124408 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-04-04 00:31:11.124419 | orchestrator | Saturday 04 April 2026 00:31:02 +0000 (0:00:01.455) 0:06:42.742 ********
2026-04-04 00:31:11.124438 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:11.124457 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:11.124477 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:11.124496 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:11.124516 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:11.124536 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:11.124557 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:11.124602 | orchestrator |
2026-04-04 00:31:11.124620 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-04-04 00:31:11.124639 | orchestrator | Saturday 04 April 2026 00:31:03 +0000 (0:00:01.692) 0:06:44.434 ********
2026-04-04 00:31:11.124658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:11.124677 | orchestrator |
2026-04-04 00:31:11.124695 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-04-04 00:31:11.124715 | orchestrator | Saturday 04 April 2026 00:31:04 +0000 (0:00:00.950) 0:06:45.385 ********
2026-04-04 00:31:11.124754 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.124774 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.124792 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.124810 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.124829 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.124849 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.124867 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.124886 | orchestrator |
2026-04-04 00:31:11.124905 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-04-04 00:31:11.124922 | orchestrator | Saturday 04 April 2026 00:31:06 +0000 (0:00:01.478) 0:06:46.863 ********
2026-04-04 00:31:11.124933 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.124944 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.124955 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.124965 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.124976 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.124987 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.125004 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.125022 | orchestrator |
2026-04-04 00:31:11.125039 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-04-04 00:31:11.125058 | orchestrator | Saturday 04 April 2026 00:31:07 +0000 (0:00:01.353) 0:06:48.217 ********
2026-04-04 00:31:11.125076 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.125095 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.125113 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.125132 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.125150 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.125170 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.125188 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.125207 | orchestrator |
2026-04-04 00:31:11.125225 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-04-04 00:31:11.125244 | orchestrator | Saturday 04 April 2026 00:31:08 +0000 (0:00:01.263) 0:06:49.481 ********
2026-04-04 00:31:11.125263 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:11.125281 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:11.125299 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:11.125317 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:11.125335 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:11.125353 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:11.125371 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:11.125389 | orchestrator |
2026-04-04 00:31:11.125407 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-04-04 00:31:11.125425 | orchestrator | Saturday 04 April 2026 00:31:09 +0000 (0:00:01.124) 0:06:50.606 ********
2026-04-04 00:31:11.125444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:11.125464 | orchestrator |
2026-04-04 00:31:11.125482 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:11.125501 | orchestrator | Saturday 04 April 2026 00:31:10 +0000 (0:00:00.878) 0:06:51.485 ********
2026-04-04 00:31:11.125519 | orchestrator |
2026-04-04 00:31:11.125537 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:11.125556 | orchestrator | Saturday 04 April 2026 00:31:10 +0000 (0:00:00.039) 0:06:51.525 ********
2026-04-04 00:31:11.125677 | orchestrator |
2026-04-04 00:31:11.125698 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:11.125716 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.181) 0:06:51.706 ********
2026-04-04 00:31:11.125735 | orchestrator |
2026-04-04 00:31:11.125755 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:11.125789 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.038) 0:06:51.745 ********
2026-04-04 00:31:38.013393 | orchestrator |
2026-04-04 00:31:38.013594 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:38.013624 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.038) 0:06:51.784 ********
2026-04-04 00:31:38.013642 | orchestrator |
2026-04-04 00:31:38.013660 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:38.013678 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.046) 0:06:51.831 ********
2026-04-04 00:31:38.013695 | orchestrator |
2026-04-04 00:31:38.013712 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-04-04 00:31:38.013730 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.038) 0:06:51.869 ********
2026-04-04 00:31:38.013748 | orchestrator |
2026-04-04 00:31:38.013765 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-04 00:31:38.013783 | orchestrator | Saturday 04 April 2026 00:31:11 +0000 (0:00:00.038) 0:06:51.908 ********
2026-04-04 00:31:38.013800 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.013820 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.013837 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.013854 | orchestrator |
2026-04-04 00:31:38.013873 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-04-04 00:31:38.013891 | orchestrator | Saturday 04 April 2026 00:31:12 +0000 (0:00:01.216) 0:06:53.124 ********
2026-04-04 00:31:38.013910 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.013930 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.013945 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.013962 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.013979 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.013998 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.014079 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.014105 | orchestrator |
2026-04-04 00:31:38.014122 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-04-04 00:31:38.014138 | orchestrator | Saturday 04 April 2026 00:31:13 +0000 (0:00:01.331) 0:06:54.455 ********
2026-04-04 00:31:38.014155 | orchestrator | changed: [testbed-manager]
2026-04-04 00:31:38.014170 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.014187 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.014203 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.014218 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.014228 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.014237 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.014247 | orchestrator |
2026-04-04 00:31:38.014257 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-04-04 00:31:38.014267 | orchestrator | Saturday 04 April 2026 00:31:15 +0000 (0:00:01.183) 0:06:55.639 ********
2026-04-04 00:31:38.014276 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:38.014286 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.014296 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.014305 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.014315 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.014324 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.014334 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.014344 | orchestrator |
2026-04-04 00:31:38.014354 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-04-04 00:31:38.014363 | orchestrator | Saturday 04 April 2026 00:31:17 +0000 (0:00:02.477) 0:06:58.116 ********
2026-04-04 00:31:38.014373 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:38.014382 | orchestrator |
2026-04-04 00:31:38.014392 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-04-04 00:31:38.014402 | orchestrator | Saturday 04 April 2026 00:31:17 +0000 (0:00:00.095) 0:06:58.212 ********
2026-04-04 00:31:38.014412 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.014421 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.014431 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.014441 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.014466 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.014476 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:31:38.014485 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.014495 | orchestrator |
2026-04-04 00:31:38.014519 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-04-04 00:31:38.014563 | orchestrator | Saturday 04 April 2026 00:31:18 +0000 (0:00:01.288) 0:06:59.501 ********
2026-04-04 00:31:38.014583 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:38.014599 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:38.014617 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:38.014629 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:38.014639 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:38.014648 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:38.014657 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:38.014667 | orchestrator |
2026-04-04 00:31:38.014677 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-04-04 00:31:38.014687 | orchestrator | Saturday 04 April 2026 00:31:19 +0000 (0:00:00.504) 0:07:00.006 ********
2026-04-04 00:31:38.014698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:38.014709 | orchestrator |
2026-04-04 00:31:38.014719 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-04-04 00:31:38.014729 | orchestrator | Saturday 04 April 2026 00:31:20 +0000 (0:00:00.871) 0:07:00.877 ********
2026-04-04 00:31:38.014739 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.014748 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.014758 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.014767 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.014777 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:38.014787 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:38.014796 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:38.014806 | orchestrator |
2026-04-04 00:31:38.014815 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-04-04 00:31:38.014825 | orchestrator | Saturday 04 April 2026 00:31:21 +0000 (0:00:01.098) 0:07:01.976 ********
2026-04-04 00:31:38.014835 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-04-04 00:31:38.014869 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-04-04 00:31:38.014880 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-04-04 00:31:38.014889 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-04-04 00:31:38.014899 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-04-04 00:31:38.014909 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-04-04 00:31:38.014918 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-04-04 00:31:38.014928 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-04-04 00:31:38.014938 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-04-04 00:31:38.014948 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-04-04 00:31:38.014957 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-04-04 00:31:38.014967 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-04-04 00:31:38.014976 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-04-04 00:31:38.014986 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-04-04 00:31:38.014996 | orchestrator |
2026-04-04 00:31:38.015005 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-04-04 00:31:38.015016 | orchestrator | Saturday 04 April 2026 00:31:23 +0000 (0:00:02.637) 0:07:04.613 ********
2026-04-04 00:31:38.015033 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:38.015051 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:38.015067 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:38.015143 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:38.015155 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:38.015164 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:38.015174 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:38.015184 | orchestrator |
2026-04-04 00:31:38.015194 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-04-04 00:31:38.015204 | orchestrator | Saturday 04 April 2026 00:31:24 +0000 (0:00:00.485) 0:07:05.099 ********
2026-04-04 00:31:38.015216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:31:38.015228 | orchestrator |
2026-04-04 00:31:38.015238 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-04-04 00:31:38.015248 | orchestrator | Saturday 04 April 2026 00:31:25 +0000 (0:00:00.981) 0:07:06.081 ********
2026-04-04 00:31:38.015257 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.015267 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.015277 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.015286 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.015296 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:38.015306 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:38.015315 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:38.015325 | orchestrator |
2026-04-04 00:31:38.015335 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-04-04 00:31:38.015345 | orchestrator | Saturday 04 April 2026 00:31:26 +0000 (0:00:00.908) 0:07:06.990 ********
2026-04-04 00:31:38.015354 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.015364 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.015374 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.015384 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.015394 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:38.015403 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:38.015413 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:38.015423 | orchestrator |
2026-04-04 00:31:38.015432 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-04-04 00:31:38.015442 | orchestrator | Saturday 04 April 2026 00:31:27 +0000 (0:00:00.843) 0:07:07.833 ********
2026-04-04 00:31:38.015452 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:38.015462 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:38.015480 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:38.015490 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:38.015499 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:38.015509 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:38.015519 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:38.015587 | orchestrator |
2026-04-04 00:31:38.015600 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-04-04 00:31:38.015610 | orchestrator | Saturday 04 April 2026 00:31:27 +0000 (0:00:00.489) 0:07:08.323 ********
2026-04-04 00:31:38.015620 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.015630 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:31:38.015639 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:31:38.015649 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:31:38.015659 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:31:38.015668 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:31:38.015678 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:31:38.015688 | orchestrator |
2026-04-04 00:31:38.015698 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-04-04 00:31:38.015708 | orchestrator | Saturday 04 April 2026 00:31:29 +0000 (0:00:01.747) 0:07:10.070 ********
2026-04-04 00:31:38.015717 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:31:38.015727 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:31:38.015737 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:31:38.015747 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:31:38.015765 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:31:38.015775 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:31:38.015785 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:31:38.015794 | orchestrator |
2026-04-04 00:31:38.015804 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-04-04 00:31:38.015814 | orchestrator | Saturday 04 April 2026 00:31:30 +0000 (0:00:00.635) 0:07:10.706 ********
2026-04-04 00:31:38.015824 | orchestrator | ok: [testbed-manager]
2026-04-04 00:31:38.015834 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:31:38.015844 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:31:38.015854 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:31:38.015863 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:31:38.015873 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:31:38.015895 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:11.301013 | orchestrator |
2026-04-04 00:32:11.301104 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-04-04 00:32:11.301116 | orchestrator | Saturday 04 April 2026 00:31:38 +0000 (0:00:07.990) 0:07:18.696 ********
2026-04-04 00:32:11.301124 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301132 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:11.301140 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:11.301147 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:11.301154 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:11.301160 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:11.301167 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:11.301174 | orchestrator |
2026-04-04 00:32:11.301181 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-04-04 00:32:11.301188 | orchestrator | Saturday 04 April 2026 00:31:39 +0000 (0:00:01.404) 0:07:20.101 ********
2026-04-04 00:32:11.301195 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301202 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:11.301208 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:11.301215 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:11.301222 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:11.301229 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:11.301236 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:11.301242 | orchestrator |
2026-04-04 00:32:11.301249 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-04-04 00:32:11.301256 | orchestrator | Saturday 04 April 2026 00:31:41 +0000 (0:00:01.851) 0:07:21.952 ********
2026-04-04 00:32:11.301263 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301270 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:32:11.301277 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:32:11.301284 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:32:11.301290 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:32:11.301297 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:32:11.301304 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:32:11.301310 | orchestrator |
2026-04-04 00:32:11.301317 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-04 00:32:11.301324 | orchestrator | Saturday 04 April 2026 00:31:43 +0000 (0:00:01.867) 0:07:23.819 ********
2026-04-04 00:32:11.301331 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301338 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.301345 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.301351 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.301358 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.301365 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.301371 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.301378 | orchestrator |
2026-04-04 00:32:11.301385 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-04 00:32:11.301392 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.867) 0:07:24.687 ********
2026-04-04 00:32:11.301399 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:11.301406 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:11.301433 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:11.301441 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:11.301447 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:11.301454 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:11.301461 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:11.301467 | orchestrator |
2026-04-04 00:32:11.301533 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-04-04 00:32:11.301540 | orchestrator | Saturday 04 April 2026 00:31:44 +0000 (0:00:00.800) 0:07:25.488 ********
2026-04-04 00:32:11.301547 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:11.301554 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:11.301561 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:11.301568 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:11.301577 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:11.301585 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:11.301592 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:11.301599 | orchestrator |
2026-04-04 00:32:11.301608 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-04 00:32:11.301616 | orchestrator | Saturday 04 April 2026 00:31:45 +0000 (0:00:00.663) 0:07:26.151 ********
2026-04-04 00:32:11.301624 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301632 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.301640 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.301648 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.301655 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.301663 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.301670 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.301678 | orchestrator |
2026-04-04 00:32:11.301686 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-04 00:32:11.301694 | orchestrator | Saturday 04 April 2026 00:31:46 +0000 (0:00:00.503) 0:07:26.655 ********
2026-04-04 00:32:11.301702 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301710 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.301718 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.301725 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.301733 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.301741 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.301748 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.301756 | orchestrator |
2026-04-04 00:32:11.301764 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-04 00:32:11.301772 | orchestrator | Saturday 04 April 2026 00:31:46 +0000 (0:00:00.542) 0:07:27.197 ********
2026-04-04 00:32:11.301781 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301792 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.301803 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.301814 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.301825 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.301837 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.301849 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.301860 | orchestrator |
2026-04-04 00:32:11.301872 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-04 00:32:11.301884 | orchestrator | Saturday 04 April 2026 00:31:47 +0000 (0:00:00.550) 0:07:27.747 ********
2026-04-04 00:32:11.301897 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.301904 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.301911 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.301917 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.301924 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.301930 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.301937 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.301947 | orchestrator |
2026-04-04 00:32:11.301976 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-04 00:32:11.301987 | orchestrator | Saturday 04 April 2026 00:31:52 +0000 (0:00:05.289) 0:07:33.037 ********
2026-04-04 00:32:11.301998 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:32:11.302073 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:32:11.302083 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:32:11.302090 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:32:11.302113 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:32:11.302120 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:32:11.302126 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:32:11.302133 | orchestrator |
2026-04-04 00:32:11.302140 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-04 00:32:11.302147 | orchestrator | Saturday 04 April 2026 00:31:53 +0000 (0:00:00.667) 0:07:33.705 ********
2026-04-04 00:32:11.302156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:32:11.302164 | orchestrator |
2026-04-04 00:32:11.302171 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-04 00:32:11.302178 | orchestrator | Saturday 04 April 2026 00:31:53 +0000 (0:00:00.774) 0:07:34.479 ********
2026-04-04 00:32:11.302184 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.302191 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.302197 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.302204 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.302210 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.302217 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.302223 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.302230 | orchestrator |
2026-04-04 00:32:11.302236 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-04 00:32:11.302243 | orchestrator | Saturday 04 April 2026 00:31:55 +0000 (0:00:02.092) 0:07:36.572 ********
2026-04-04 00:32:11.302250 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.302256 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.302263 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.302269 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.302276 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.302282 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.302289 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.302295 | orchestrator |
2026-04-04 00:32:11.302302 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-04 00:32:11.302309 | orchestrator | Saturday 04 April 2026 00:31:57 +0000 (0:00:01.240) 0:07:37.812 ********
2026-04-04 00:32:11.302315 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:32:11.302322 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:32:11.302332 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:32:11.302343 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:32:11.302354 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:32:11.302365 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:32:11.302376 | orchestrator | ok: [testbed-manager]
2026-04-04 00:32:11.302388 | orchestrator |
2026-04-04 00:32:11.302399 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-04 00:32:11.302410 | orchestrator | Saturday 04 April 2026 00:31:58 +0000 (0:00:01.391) 0:07:39.204 ********
2026-04-04 00:32:11.302419 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302428 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302434 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302445 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302452 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302465 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302499 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-04 00:32:11.302506 | orchestrator |
2026-04-04 00:32:11.302513 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-04 00:32:11.302520 | orchestrator | Saturday 04 April 2026 00:32:00 +0000 (0:00:01.811) 0:07:41.015 ********
2026-04-04 00:32:11.302527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:32:11.302534 | orchestrator |
2026-04-04 00:32:11.302541 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-04 00:32:11.302547 |
orchestrator | Saturday 04 April 2026 00:32:01 +0000 (0:00:00.917) 0:07:41.932 ******** 2026-04-04 00:32:11.302554 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:11.302561 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:11.302567 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:11.302574 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:11.302581 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:11.302587 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:11.302594 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:11.302600 | orchestrator | 2026-04-04 00:32:11.302615 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-04-04 00:32:42.434375 | orchestrator | Saturday 04 April 2026 00:32:11 +0000 (0:00:09.991) 0:07:51.923 ******** 2026-04-04 00:32:42.434501 | orchestrator | ok: [testbed-manager] 2026-04-04 00:32:42.434510 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:32:42.434515 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:32:42.434519 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:32:42.434524 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:32:42.434528 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:32:42.434532 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:32:42.434536 | orchestrator | 2026-04-04 00:32:42.434542 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-04-04 00:32:42.434547 | orchestrator | Saturday 04 April 2026 00:32:13 +0000 (0:00:01.726) 0:07:53.650 ******** 2026-04-04 00:32:42.434551 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:32:42.434556 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:32:42.434560 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:32:42.434564 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:32:42.434569 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:32:42.434573 | orchestrator | ok: [testbed-node-4] 
2026-04-04 00:32:42.434577 | orchestrator | 2026-04-04 00:32:42.434582 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-04-04 00:32:42.434586 | orchestrator | Saturday 04 April 2026 00:32:14 +0000 (0:00:01.586) 0:07:55.236 ******** 2026-04-04 00:32:42.434590 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.434596 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.434600 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.434604 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.434609 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.434613 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.434617 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.434621 | orchestrator | 2026-04-04 00:32:42.434625 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-04-04 00:32:42.434629 | orchestrator | 2026-04-04 00:32:42.434634 | orchestrator | TASK [Include hardening role] ************************************************** 2026-04-04 00:32:42.434638 | orchestrator | Saturday 04 April 2026 00:32:15 +0000 (0:00:01.295) 0:07:56.532 ******** 2026-04-04 00:32:42.434642 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:32:42.434662 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:32:42.434667 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:32:42.434671 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:32:42.434675 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:32:42.434679 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:32:42.434683 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:32:42.434687 | orchestrator | 2026-04-04 00:32:42.434691 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-04-04 00:32:42.434695 | orchestrator | 2026-04-04 00:32:42.434699 | orchestrator | TASK 
[osism.services.journald : Copy configuration file] *********************** 2026-04-04 00:32:42.434705 | orchestrator | Saturday 04 April 2026 00:32:16 +0000 (0:00:00.539) 0:07:57.072 ******** 2026-04-04 00:32:42.434712 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.434718 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.434725 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.434732 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.434739 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.434745 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.434751 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.434758 | orchestrator | 2026-04-04 00:32:42.434765 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-04-04 00:32:42.434771 | orchestrator | Saturday 04 April 2026 00:32:17 +0000 (0:00:01.422) 0:07:58.494 ******** 2026-04-04 00:32:42.434778 | orchestrator | ok: [testbed-manager] 2026-04-04 00:32:42.434785 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:32:42.434792 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:32:42.434800 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:32:42.434804 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:32:42.434808 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:32:42.434812 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:32:42.434816 | orchestrator | 2026-04-04 00:32:42.434821 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-04-04 00:32:42.434825 | orchestrator | Saturday 04 April 2026 00:32:19 +0000 (0:00:01.649) 0:08:00.144 ******** 2026-04-04 00:32:42.434829 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:32:42.434843 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:32:42.434848 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:32:42.434852 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 00:32:42.434856 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:32:42.434860 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:32:42.434875 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:32:42.434879 | orchestrator | 2026-04-04 00:32:42.434889 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-04-04 00:32:42.434893 | orchestrator | Saturday 04 April 2026 00:32:19 +0000 (0:00:00.452) 0:08:00.596 ******** 2026-04-04 00:32:42.434898 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:32:42.434904 | orchestrator | 2026-04-04 00:32:42.434908 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-04-04 00:32:42.434913 | orchestrator | Saturday 04 April 2026 00:32:20 +0000 (0:00:00.787) 0:08:01.384 ******** 2026-04-04 00:32:42.434921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:32:42.434928 | orchestrator | 2026-04-04 00:32:42.434933 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-04-04 00:32:42.434938 | orchestrator | Saturday 04 April 2026 00:32:21 +0000 (0:00:00.920) 0:08:02.305 ******** 2026-04-04 00:32:42.434942 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.434947 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.434952 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.434957 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.434967 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.434972 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.434977 | 
orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.434982 | orchestrator | 2026-04-04 00:32:42.434998 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-04-04 00:32:42.435003 | orchestrator | Saturday 04 April 2026 00:32:31 +0000 (0:00:09.398) 0:08:11.703 ******** 2026-04-04 00:32:42.435008 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435013 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.435018 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435022 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435027 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435032 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435037 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435042 | orchestrator | 2026-04-04 00:32:42.435046 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-04-04 00:32:42.435051 | orchestrator | Saturday 04 April 2026 00:32:31 +0000 (0:00:00.855) 0:08:12.558 ******** 2026-04-04 00:32:42.435056 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435061 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.435066 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435071 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435076 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435080 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435085 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435090 | orchestrator | 2026-04-04 00:32:42.435095 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-04-04 00:32:42.435100 | orchestrator | Saturday 04 April 2026 00:32:33 +0000 (0:00:01.375) 0:08:13.934 ******** 2026-04-04 00:32:42.435105 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435109 | orchestrator | 
changed: [testbed-node-0] 2026-04-04 00:32:42.435114 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435119 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435123 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435128 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435133 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435138 | orchestrator | 2026-04-04 00:32:42.435142 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2026-04-04 00:32:42.435147 | orchestrator | Saturday 04 April 2026 00:32:35 +0000 (0:00:01.939) 0:08:15.874 ******** 2026-04-04 00:32:42.435152 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435157 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435162 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.435167 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435171 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435176 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435181 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435186 | orchestrator | 2026-04-04 00:32:42.435190 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-04-04 00:32:42.435195 | orchestrator | Saturday 04 April 2026 00:32:36 +0000 (0:00:01.248) 0:08:17.123 ******** 2026-04-04 00:32:42.435200 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435205 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.435210 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435214 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435219 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435223 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435228 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435233 | orchestrator | 2026-04-04 
00:32:42.435238 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-04-04 00:32:42.435243 | orchestrator | 2026-04-04 00:32:42.435259 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-04-04 00:32:42.435265 | orchestrator | Saturday 04 April 2026 00:32:37 +0000 (0:00:01.200) 0:08:18.323 ******** 2026-04-04 00:32:42.435323 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:32:42.435328 | orchestrator | 2026-04-04 00:32:42.435341 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-04 00:32:42.435346 | orchestrator | Saturday 04 April 2026 00:32:38 +0000 (0:00:00.933) 0:08:19.257 ******** 2026-04-04 00:32:42.435350 | orchestrator | ok: [testbed-manager] 2026-04-04 00:32:42.435358 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:32:42.435362 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:32:42.435367 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:32:42.435371 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:32:42.435375 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:32:42.435379 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:32:42.435383 | orchestrator | 2026-04-04 00:32:42.435387 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-04 00:32:42.435391 | orchestrator | Saturday 04 April 2026 00:32:39 +0000 (0:00:00.870) 0:08:20.128 ******** 2026-04-04 00:32:42.435395 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:42.435399 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:42.435404 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:42.435408 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:42.435412 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:42.435416 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:42.435420 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:42.435424 | orchestrator | 2026-04-04 00:32:42.435428 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-04-04 00:32:42.435433 | orchestrator | Saturday 04 April 2026 00:32:40 +0000 (0:00:01.250) 0:08:21.379 ******** 2026-04-04 00:32:42.435453 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:32:42.435457 | orchestrator | 2026-04-04 00:32:42.435462 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-04-04 00:32:42.435466 | orchestrator | Saturday 04 April 2026 00:32:41 +0000 (0:00:00.806) 0:08:22.186 ******** 2026-04-04 00:32:42.435470 | orchestrator | ok: [testbed-manager] 2026-04-04 00:32:42.435474 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:32:42.435478 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:32:42.435482 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:32:42.435486 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:32:42.435490 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:32:42.435494 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:32:42.435498 | orchestrator | 2026-04-04 00:32:42.435506 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-04-04 00:32:43.950601 | orchestrator | Saturday 04 April 2026 00:32:42 +0000 (0:00:00.871) 0:08:23.057 ******** 2026-04-04 00:32:43.950675 | orchestrator | changed: [testbed-manager] 2026-04-04 00:32:43.950683 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:32:43.950687 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:32:43.950691 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:32:43.950695 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:32:43.950699 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:32:43.950703 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:32:43.950708 | orchestrator | 2026-04-04 00:32:43.950713 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:32:43.950718 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-04-04 00:32:43.950724 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:32:43.950728 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-04 00:32:43.950750 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-04-04 00:32:43.950754 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:32:43.950758 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:32:43.950762 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-04-04 00:32:43.950765 | orchestrator | 2026-04-04 00:32:43.950769 | orchestrator | 2026-04-04 00:32:43.950773 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:32:43.950778 | orchestrator | Saturday 04 April 2026 00:32:43 +0000 (0:00:01.206) 0:08:24.263 ******** 2026-04-04 00:32:43.950781 | orchestrator | =============================================================================== 2026-04-04 00:32:43.950785 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.38s 2026-04-04 00:32:43.950789 | orchestrator | osism.commons.packages : Download required packages -------------------- 56.02s 2026-04-04 00:32:43.950793 | orchestrator | 
osism.commons.cleanup : Cleanup installed packages --------------------- 33.35s 2026-04-04 00:32:43.950796 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s 2026-04-04 00:32:43.950800 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.17s 2026-04-04 00:32:43.950804 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.16s 2026-04-04 00:32:43.950808 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.12s 2026-04-04 00:32:43.950812 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.62s 2026-04-04 00:32:43.950816 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.25s 2026-04-04 00:32:43.950820 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.99s 2026-04-04 00:32:43.950823 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.40s 2026-04-04 00:32:43.950837 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.46s 2026-04-04 00:32:43.950842 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.01s 2026-04-04 00:32:43.950846 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.99s 2026-04-04 00:32:43.950849 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.96s 2026-04-04 00:32:43.950853 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.01s 2026-04-04 00:32:43.950857 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.84s 2026-04-04 00:32:43.950861 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.03s 2026-04-04 00:32:43.950864 | orchestrator | 
osism.commons.services : Populate service facts ------------------------- 5.45s 2026-04-04 00:32:43.950868 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.29s 2026-04-04 00:32:44.137578 | orchestrator | + osism apply fail2ban 2026-04-04 00:32:55.768461 | orchestrator | 2026-04-04 00:32:55 | INFO  | Prepare task for execution of fail2ban. 2026-04-04 00:32:55.853811 | orchestrator | 2026-04-04 00:32:55 | INFO  | Task 71e5a5ef-a41f-488b-b427-df2a07f8fb40 (fail2ban) was prepared for execution. 2026-04-04 00:32:55.853898 | orchestrator | 2026-04-04 00:32:55 | INFO  | It takes a moment until task 71e5a5ef-a41f-488b-b427-df2a07f8fb40 (fail2ban) has been started and output is visible here. 2026-04-04 00:33:17.205819 | orchestrator | 2026-04-04 00:33:17.205897 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-04-04 00:33:17.205927 | orchestrator | 2026-04-04 00:33:17.205935 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-04-04 00:33:17.205941 | orchestrator | Saturday 04 April 2026 00:32:59 +0000 (0:00:00.371) 0:00:00.371 ******** 2026-04-04 00:33:17.205949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:33:17.205957 | orchestrator | 2026-04-04 00:33:17.205963 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-04-04 00:33:17.205969 | orchestrator | Saturday 04 April 2026 00:33:00 +0000 (0:00:01.129) 0:00:01.500 ******** 2026-04-04 00:33:17.205975 | orchestrator | changed: [testbed-manager] 2026-04-04 00:33:17.205982 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:17.205988 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:17.205994 | 
orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:17.205999 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:17.206005 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:17.206011 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:17.206052 | orchestrator | 2026-04-04 00:33:17.206058 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-04-04 00:33:17.206064 | orchestrator | Saturday 04 April 2026 00:33:12 +0000 (0:00:11.703) 0:00:13.203 ******** 2026-04-04 00:33:17.206070 | orchestrator | changed: [testbed-manager] 2026-04-04 00:33:17.206076 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:17.206082 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:17.206087 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:17.206093 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:17.206099 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:17.206105 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:17.206110 | orchestrator | 2026-04-04 00:33:17.206116 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-04-04 00:33:17.206123 | orchestrator | Saturday 04 April 2026 00:33:13 +0000 (0:00:01.660) 0:00:14.864 ******** 2026-04-04 00:33:17.206128 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:17.206135 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:17.206141 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:17.206147 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:17.206153 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:17.206159 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:17.206164 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:17.206170 | orchestrator | 2026-04-04 00:33:17.206196 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-04-04 00:33:17.206203 | orchestrator | Saturday 04 
April 2026 00:33:15 +0000 (0:00:01.288) 0:00:16.153 ******** 2026-04-04 00:33:17.206209 | orchestrator | changed: [testbed-manager] 2026-04-04 00:33:17.206215 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:17.206221 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:17.206227 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:17.206233 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:17.206239 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:17.206244 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:17.206250 | orchestrator | 2026-04-04 00:33:17.206256 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:33:17.206263 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206269 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206275 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206282 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206305 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206311 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206317 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:33:17.206323 | orchestrator | 2026-04-04 00:33:17.206329 | orchestrator | 2026-04-04 00:33:17.206335 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:33:17.206341 | orchestrator | Saturday 04 April 2026 00:33:16 +0000 (0:00:01.703) 0:00:17.856 ******** 2026-04-04 00:33:17.206347 | 
orchestrator | =============================================================================== 2026-04-04 00:33:17.206352 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.70s 2026-04-04 00:33:17.206358 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.70s 2026-04-04 00:33:17.206380 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.66s 2026-04-04 00:33:17.206387 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.29s 2026-04-04 00:33:17.206393 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.13s 2026-04-04 00:33:17.372422 | orchestrator | + osism apply network 2026-04-04 00:33:28.650468 | orchestrator | 2026-04-04 00:33:28 | INFO  | Prepare task for execution of network. 2026-04-04 00:33:28.721467 | orchestrator | 2026-04-04 00:33:28 | INFO  | Task e3d7a6ba-a461-49ca-b6e6-c94f9f342b9f (network) was prepared for execution. 2026-04-04 00:33:28.721576 | orchestrator | 2026-04-04 00:33:28 | INFO  | It takes a moment until task e3d7a6ba-a461-49ca-b6e6-c94f9f342b9f (network) has been started and output is visible here. 
2026-04-04 00:33:57.230970 | orchestrator | 2026-04-04 00:33:57.231085 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-04 00:33:57.231109 | orchestrator | 2026-04-04 00:33:57.231128 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-04 00:33:57.231146 | orchestrator | Saturday 04 April 2026 00:33:32 +0000 (0:00:00.334) 0:00:00.334 ******** 2026-04-04 00:33:57.231163 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.231181 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.231197 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.231213 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.231229 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.231246 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.231262 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:57.231277 | orchestrator | 2026-04-04 00:33:57.231293 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-04 00:33:57.231311 | orchestrator | Saturday 04 April 2026 00:33:32 +0000 (0:00:00.597) 0:00:00.932 ******** 2026-04-04 00:33:57.231358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:33:57.231378 | orchestrator | 2026-04-04 00:33:57.231394 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-04 00:33:57.231411 | orchestrator | Saturday 04 April 2026 00:33:33 +0000 (0:00:01.169) 0:00:02.101 ******** 2026-04-04 00:33:57.231427 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.231444 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.231459 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.231475 | 
orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.231492 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.231508 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.231559 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:57.231577 | orchestrator | 2026-04-04 00:33:57.231598 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-04 00:33:57.231616 | orchestrator | Saturday 04 April 2026 00:33:36 +0000 (0:00:02.735) 0:00:04.837 ******** 2026-04-04 00:33:57.231639 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.231659 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.231679 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.231698 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.231719 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.231737 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.231755 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:57.231773 | orchestrator | 2026-04-04 00:33:57.231791 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-04 00:33:57.231808 | orchestrator | Saturday 04 April 2026 00:33:38 +0000 (0:00:01.740) 0:00:06.578 ******** 2026-04-04 00:33:57.231827 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-04 00:33:57.231844 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-04 00:33:57.231860 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-04-04 00:33:57.231876 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-04 00:33:57.231894 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-04 00:33:57.231912 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-04 00:33:57.231929 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-04 00:33:57.231946 | orchestrator | 2026-04-04 00:33:57.231963 | orchestrator | TASK [osism.commons.network : Write 
network_netplan_config_template to temporary file] *** 2026-04-04 00:33:57.231982 | orchestrator | Saturday 04 April 2026 00:33:39 +0000 (0:00:01.291) 0:00:07.869 ******** 2026-04-04 00:33:57.231999 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:33:57.232018 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:33:57.232035 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:33:57.232052 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:33:57.232068 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:33:57.232085 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:33:57.232103 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:33:57.232119 | orchestrator | 2026-04-04 00:33:57.232139 | orchestrator | TASK [osism.commons.network : Render netplan configuration from network_netplan_config_template variable] *** 2026-04-04 00:33:57.232167 | orchestrator | Saturday 04 April 2026 00:33:40 +0000 (0:00:00.611) 0:00:08.480 ******** 2026-04-04 00:33:57.232194 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:33:57.232220 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:33:57.232245 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:33:57.232270 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:33:57.232289 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:33:57.232304 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:33:57.232442 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:33:57.232477 | orchestrator | 2026-04-04 00:33:57.232495 | orchestrator | TASK [osism.commons.network : Remove temporary network_netplan_config_template file] *** 2026-04-04 00:33:57.232512 | orchestrator | Saturday 04 April 2026 00:33:40 +0000 (0:00:00.748) 0:00:09.228 ******** 2026-04-04 00:33:57.232531 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:33:57.232548 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:33:57.232565 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 00:33:57.232582 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:33:57.232599 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:33:57.232616 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:33:57.232632 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:33:57.232650 | orchestrator | 2026-04-04 00:33:57.232690 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-04 00:33:57.232708 | orchestrator | Saturday 04 April 2026 00:33:41 +0000 (0:00:00.766) 0:00:09.995 ******** 2026-04-04 00:33:57.232726 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:33:57.232819 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 00:33:57.232841 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 00:33:57.232859 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:33:57.232876 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 00:33:57.232895 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 00:33:57.232911 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 00:33:57.232927 | orchestrator | 2026-04-04 00:33:57.232975 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-04 00:33:57.232995 | orchestrator | Saturday 04 April 2026 00:33:44 +0000 (0:00:03.214) 0:00:13.209 ******** 2026-04-04 00:33:57.233011 | orchestrator | changed: [testbed-manager] 2026-04-04 00:33:57.233027 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:57.233044 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:57.233060 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:57.233076 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:57.233092 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:57.233108 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:57.233123 | orchestrator | 2026-04-04 00:33:57.233133 | orchestrator | TASK 
[osism.commons.network : Remove netplan configuration template] *********** 2026-04-04 00:33:57.233143 | orchestrator | Saturday 04 April 2026 00:33:46 +0000 (0:00:01.435) 0:00:14.644 ******** 2026-04-04 00:33:57.233153 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:33:57.233163 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:33:57.233173 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 00:33:57.233182 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 00:33:57.233192 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 00:33:57.233201 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 00:33:57.233211 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 00:33:57.233220 | orchestrator | 2026-04-04 00:33:57.233230 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-04 00:33:57.233240 | orchestrator | Saturday 04 April 2026 00:33:47 +0000 (0:00:01.562) 0:00:16.207 ******** 2026-04-04 00:33:57.233250 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.233260 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.233270 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.233279 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.233289 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.233298 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.233308 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:57.233343 | orchestrator | 2026-04-04 00:33:57.233354 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-04 00:33:57.233364 | orchestrator | Saturday 04 April 2026 00:33:48 +0000 (0:00:01.082) 0:00:17.290 ******** 2026-04-04 00:33:57.233373 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:33:57.233383 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:33:57.233393 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:33:57.233402 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:33:57.233411 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:33:57.233421 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:33:57.233430 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:33:57.233440 | orchestrator | 2026-04-04 00:33:57.233450 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-04 00:33:57.233459 | orchestrator | Saturday 04 April 2026 00:33:49 +0000 (0:00:00.554) 0:00:17.845 ******** 2026-04-04 00:33:57.233469 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.233479 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.233488 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.233498 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.233507 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.233517 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.233526 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:33:57.233536 | orchestrator | 2026-04-04 00:33:57.233557 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-04 00:33:57.233566 | orchestrator | Saturday 04 April 2026 00:33:51 +0000 (0:00:02.236) 0:00:20.082 ******** 2026-04-04 00:33:57.233576 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:33:57.233586 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:33:57.233596 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:33:57.233605 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:33:57.233615 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:33:57.233624 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:33:57.233634 | orchestrator | changed: [testbed-manager] => (item={'src': '/opt/configuration/network/iptables.sh', 'dest': 'routable.d/iptables.sh'}) 2026-04-04 00:33:57.233645 | orchestrator | 2026-04-04 00:33:57.233655 | orchestrator | TASK 
[osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-04 00:33:57.233674 | orchestrator | Saturday 04 April 2026 00:33:52 +0000 (0:00:00.870) 0:00:20.952 ******** 2026-04-04 00:33:57.233683 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.233693 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:33:57.233703 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:33:57.233712 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:33:57.233722 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:33:57.233731 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:33:57.233740 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:33:57.233750 | orchestrator | 2026-04-04 00:33:57.233759 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-04 00:33:57.233769 | orchestrator | Saturday 04 April 2026 00:33:54 +0000 (0:00:01.719) 0:00:22.672 ******** 2026-04-04 00:33:57.233780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:33:57.233793 | orchestrator | 2026-04-04 00:33:57.233802 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-04 00:33:57.233812 | orchestrator | Saturday 04 April 2026 00:33:55 +0000 (0:00:01.202) 0:00:23.874 ******** 2026-04-04 00:33:57.233821 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.233831 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.233840 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.233850 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.233859 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.233868 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:33:57.233878 | orchestrator | ok: [testbed-node-5] 2026-04-04 
00:33:57.233887 | orchestrator | 2026-04-04 00:33:57.233897 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-04 00:33:57.233907 | orchestrator | Saturday 04 April 2026 00:33:56 +0000 (0:00:01.126) 0:00:25.000 ******** 2026-04-04 00:33:57.233916 | orchestrator | ok: [testbed-manager] 2026-04-04 00:33:57.233926 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:33:57.233936 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:33:57.233945 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:33:57.233955 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:33:57.233974 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:13.127394 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:13.127512 | orchestrator | 2026-04-04 00:34:13.127528 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-04 00:34:13.127541 | orchestrator | Saturday 04 April 2026 00:33:57 +0000 (0:00:00.647) 0:00:25.648 ******** 2026-04-04 00:34:13.127552 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127562 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127580 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127605 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127625 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127674 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127691 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127706 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127725 | orchestrator | changed: [testbed-node-2] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127742 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127759 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127778 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-04 00:34:13.127795 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127814 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-04 00:34:13.127833 | orchestrator | 2026-04-04 00:34:13.127846 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-04 00:34:13.127858 | orchestrator | Saturday 04 April 2026 00:33:58 +0000 (0:00:01.251) 0:00:26.899 ******** 2026-04-04 00:34:13.127870 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:13.127883 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:13.127894 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:13.127905 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:13.127917 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:13.127928 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:13.127940 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:13.127951 | orchestrator | 2026-04-04 00:34:13.127962 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-04 00:34:13.127973 | orchestrator | Saturday 04 April 2026 00:33:59 +0000 (0:00:00.614) 0:00:27.513 ******** 2026-04-04 00:34:13.127987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4 2026-04-04 00:34:13.128001 | orchestrator | 2026-04-04 
00:34:13.128012 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-04 00:34:13.128023 | orchestrator | Saturday 04 April 2026 00:34:03 +0000 (0:00:04.271) 0:00:31.785 ******** 2026-04-04 00:34:13.128037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128064 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-04 00:34:13.128077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128115 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': 
['192.168.128.5/20']}}) 2026-04-04 00:34:13.128160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-04 00:34:13.128173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-04 00:34:13.128209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-04 00:34:13.128219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-04 00:34:13.128229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': 
'192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-04 00:34:13.128239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-04 00:34:13.128248 | orchestrator | 2026-04-04 00:34:13.128258 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-04 00:34:13.128268 | orchestrator | Saturday 04 April 2026 00:34:08 +0000 (0:00:05.119) 0:00:36.905 ******** 2026-04-04 00:34:13.128278 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.112.5/20']}}) 2026-04-04 00:34:13.128321 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.5', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'addresses': ['192.168.128.5/20']}}) 2026-04-04 00:34:13.128336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128356 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:13.128402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'vni': 42, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': []}}) 2026-04-04 00:34:23.315868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.11', 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.11/20']}}) 2026-04-04 00:34:23.316801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.13', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.13/20']}}) 2026-04-04 00:34:23.316835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.15', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.5'], 'addresses': ['192.168.128.15/20']}}) 2026-04-04 00:34:23.316847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.10', 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.10/20']}}) 2026-04-04 00:34:23.316858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.12', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.12/20']}}) 2026-04-04 00:34:23.316868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'vni': 23, 'mtu': 1350, 'local_ip': '192.168.16.14', 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'addresses': ['192.168.128.14/20']}}) 2026-04-04 00:34:23.316879 | orchestrator | 2026-04-04 00:34:23.316893 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-04 00:34:23.316904 | orchestrator | Saturday 04 April 2026 00:34:14 +0000 (0:00:05.507) 0:00:42.413 ******** 2026-04-04 00:34:23.316916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:23.316927 | orchestrator | 2026-04-04 00:34:23.316937 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-04 00:34:23.316947 | orchestrator | Saturday 04 April 2026 00:34:15 +0000 (0:00:01.097) 0:00:43.510 ******** 2026-04-04 00:34:23.316957 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:23.316968 | orchestrator | ok: [testbed-node-0] 2026-04-04 
00:34:23.316978 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:23.316988 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:23.316998 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:23.317007 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:23.317017 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:23.317095 | orchestrator | 2026-04-04 00:34:23.317164 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-04 00:34:23.317176 | orchestrator | Saturday 04 April 2026 00:34:16 +0000 (0:00:00.965) 0:00:44.475 ******** 2026-04-04 00:34:23.317186 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317197 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317207 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317217 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317227 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317236 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317246 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317256 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317266 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:23.317328 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317338 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317348 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317358 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317367 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:23.317377 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317387 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317397 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317426 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317436 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:23.317446 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317456 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317465 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317475 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317485 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:23.317495 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317504 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317514 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317524 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:23.317533 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317543 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:23.317553 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-04 00:34:23.317563 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-04 00:34:23.317573 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-04 00:34:23.317582 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-04 00:34:23.317592 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:23.317602 | orchestrator | 2026-04-04 00:34:23.317612 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-04-04 00:34:23.317631 | orchestrator | Saturday 04 April 2026 00:34:16 +0000 (0:00:00.787) 0:00:45.263 ******** 2026-04-04 00:34:23.317642 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:34:23.317652 | orchestrator | 2026-04-04 00:34:23.317662 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-04-04 00:34:23.317671 | orchestrator | Saturday 04 April 2026 00:34:18 +0000 (0:00:01.097) 0:00:46.361 ******** 2026-04-04 00:34:23.317681 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:23.317691 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:23.317701 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:23.317711 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:23.317720 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:23.317730 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:23.317740 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:23.317749 | orchestrator | 2026-04-04 00:34:23.317759 | orchestrator | TASK [osism.commons.network : Deploy 
network-extra-init systemd service] ******* 2026-04-04 00:34:23.317769 | orchestrator | Saturday 04 April 2026 00:34:18 +0000 (0:00:00.539) 0:00:46.900 ******** 2026-04-04 00:34:23.317779 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:23.317789 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:23.317798 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:23.317808 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:23.317818 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:23.317828 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:23.317837 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:23.317847 | orchestrator | 2026-04-04 00:34:23.317862 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-04-04 00:34:23.317872 | orchestrator | Saturday 04 April 2026 00:34:19 +0000 (0:00:00.645) 0:00:47.546 ******** 2026-04-04 00:34:23.317882 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:23.317891 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:23.317901 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:23.317911 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:23.317920 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:23.317935 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:23.317953 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:23.317970 | orchestrator | 2026-04-04 00:34:23.317987 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-04-04 00:34:23.318134 | orchestrator | Saturday 04 April 2026 00:34:19 +0000 (0:00:00.551) 0:00:48.097 ******** 2026-04-04 00:34:23.318164 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:23.318179 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:23.318189 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:23.318199 | orchestrator | ok: [testbed-node-0] 
2026-04-04 00:34:23.318209 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:23.318218 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:23.318228 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:23.318237 | orchestrator | 2026-04-04 00:34:23.318247 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-04-04 00:34:23.318257 | orchestrator | Saturday 04 April 2026 00:34:21 +0000 (0:00:01.589) 0:00:49.686 ******** 2026-04-04 00:34:23.318267 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:23.318299 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:23.318309 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:23.318319 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:23.318329 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:23.318338 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:23.318347 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:23.318357 | orchestrator | 2026-04-04 00:34:23.318366 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-04-04 00:34:23.318376 | orchestrator | Saturday 04 April 2026 00:34:22 +0000 (0:00:01.045) 0:00:50.732 ******** 2026-04-04 00:34:23.318398 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:23.318409 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:34:23.318419 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:34:23.318430 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:34:23.318440 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:34:23.318451 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:34:23.318475 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:34:26.126847 | orchestrator | 2026-04-04 00:34:26.126983 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-04-04 00:34:26.127008 | orchestrator | Saturday 04 April 2026 00:34:24 +0000 (0:00:02.003) 0:00:52.736 ******** 2026-04-04 00:34:26.127025 | 
orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:26.127043 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:26.127061 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:26.127079 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:26.127098 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:26.127117 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:26.127134 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:26.127152 | orchestrator | 2026-04-04 00:34:26.127170 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-04-04 00:34:26.127188 | orchestrator | Saturday 04 April 2026 00:34:25 +0000 (0:00:00.828) 0:00:53.564 ******** 2026-04-04 00:34:26.127211 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:34:26.127228 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:34:26.127245 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:34:26.127262 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:34:26.127311 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:34:26.127329 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:34:26.127346 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:34:26.127369 | orchestrator | 2026-04-04 00:34:26.127393 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:34:26.127419 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-04 00:34:26.127443 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127466 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127487 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127507 | 
orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127530 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127553 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:34:26.127581 | orchestrator | 2026-04-04 00:34:26.127604 | orchestrator | 2026-04-04 00:34:26.127627 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:34:26.127651 | orchestrator | Saturday 04 April 2026 00:34:25 +0000 (0:00:00.533) 0:00:54.098 ******** 2026-04-04 00:34:26.127669 | orchestrator | =============================================================================== 2026-04-04 00:34:26.127687 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.51s 2026-04-04 00:34:26.127705 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.12s 2026-04-04 00:34:26.127723 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.27s 2026-04-04 00:34:26.127784 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.21s 2026-04-04 00:34:26.127804 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.74s 2026-04-04 00:34:26.127823 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.24s 2026-04-04 00:34:26.127842 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.00s 2026-04-04 00:34:26.127860 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s 2026-04-04 00:34:26.127879 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s 2026-04-04 00:34:26.127896 | orchestrator | 
osism.commons.network : Disable and stop network-extra-init service ----- 1.59s 2026-04-04 00:34:26.127913 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.56s 2026-04-04 00:34:26.127930 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2026-04-04 00:34:26.127949 | orchestrator | osism.commons.network : Create required directories --------------------- 1.29s 2026-04-04 00:34:26.127967 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s 2026-04-04 00:34:26.127986 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s 2026-04-04 00:34:26.128004 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s 2026-04-04 00:34:26.128021 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2026-04-04 00:34:26.128037 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.10s 2026-04-04 00:34:26.128056 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2026-04-04 00:34:26.128074 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.08s 2026-04-04 00:34:26.301613 | orchestrator | + osism apply wireguard 2026-04-04 00:34:37.649635 | orchestrator | 2026-04-04 00:34:37 | INFO  | Prepare task for execution of wireguard. 2026-04-04 00:34:37.721705 | orchestrator | 2026-04-04 00:34:37 | INFO  | Task 3e2b001b-8c18-423c-8dc2-3d187d82790c (wireguard) was prepared for execution. 2026-04-04 00:34:37.721831 | orchestrator | 2026-04-04 00:34:37 | INFO  | It takes a moment until task 3e2b001b-8c18-423c-8dc2-3d187d82790c (wireguard) has been started and output is visible here. 
2026-04-04 00:34:56.283722 | orchestrator | 2026-04-04 00:34:56.283830 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-04-04 00:34:56.283848 | orchestrator | 2026-04-04 00:34:56.283861 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-04-04 00:34:56.283874 | orchestrator | Saturday 04 April 2026 00:34:40 +0000 (0:00:00.285) 0:00:00.285 ******** 2026-04-04 00:34:56.283886 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:56.283898 | orchestrator | 2026-04-04 00:34:56.283910 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-04-04 00:34:56.283921 | orchestrator | Saturday 04 April 2026 00:34:42 +0000 (0:00:01.785) 0:00:02.070 ******** 2026-04-04 00:34:56.283933 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.283944 | orchestrator | 2026-04-04 00:34:56.283955 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-04-04 00:34:56.283966 | orchestrator | Saturday 04 April 2026 00:34:48 +0000 (0:00:06.158) 0:00:08.229 ******** 2026-04-04 00:34:56.283977 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284019 | orchestrator | 2026-04-04 00:34:56.284032 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-04-04 00:34:56.284043 | orchestrator | Saturday 04 April 2026 00:34:49 +0000 (0:00:00.526) 0:00:08.756 ******** 2026-04-04 00:34:56.284054 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284065 | orchestrator | 2026-04-04 00:34:56.284076 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-04-04 00:34:56.284087 | orchestrator | Saturday 04 April 2026 00:34:49 +0000 (0:00:00.411) 0:00:09.167 ******** 2026-04-04 00:34:56.284125 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:56.284137 | orchestrator | 2026-04-04 
00:34:56.284148 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-04-04 00:34:56.284159 | orchestrator | Saturday 04 April 2026 00:34:50 +0000 (0:00:00.534) 0:00:09.701 ******** 2026-04-04 00:34:56.284187 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:56.284199 | orchestrator | 2026-04-04 00:34:56.284210 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-04-04 00:34:56.284221 | orchestrator | Saturday 04 April 2026 00:34:50 +0000 (0:00:00.387) 0:00:10.089 ******** 2026-04-04 00:34:56.284232 | orchestrator | ok: [testbed-manager] 2026-04-04 00:34:56.284269 | orchestrator | 2026-04-04 00:34:56.284282 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-04-04 00:34:56.284294 | orchestrator | Saturday 04 April 2026 00:34:51 +0000 (0:00:00.411) 0:00:10.500 ******** 2026-04-04 00:34:56.284307 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284319 | orchestrator | 2026-04-04 00:34:56.284332 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-04-04 00:34:56.284344 | orchestrator | Saturday 04 April 2026 00:34:52 +0000 (0:00:01.170) 0:00:11.671 ******** 2026-04-04 00:34:56.284356 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-04 00:34:56.284369 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284382 | orchestrator | 2026-04-04 00:34:56.284395 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-04-04 00:34:56.284407 | orchestrator | Saturday 04 April 2026 00:34:53 +0000 (0:00:00.932) 0:00:12.603 ******** 2026-04-04 00:34:56.284420 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284432 | orchestrator | 2026-04-04 00:34:56.284445 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-04-04 
00:34:56.284464 | orchestrator | Saturday 04 April 2026 00:34:55 +0000 (0:00:01.907) 0:00:14.511 ******** 2026-04-04 00:34:56.284477 | orchestrator | changed: [testbed-manager] 2026-04-04 00:34:56.284489 | orchestrator | 2026-04-04 00:34:56.284501 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:34:56.284541 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:34:56.284556 | orchestrator | 2026-04-04 00:34:56.284568 | orchestrator | 2026-04-04 00:34:56.284582 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:34:56.284595 | orchestrator | Saturday 04 April 2026 00:34:56 +0000 (0:00:00.918) 0:00:15.430 ******** 2026-04-04 00:34:56.284607 | orchestrator | =============================================================================== 2026-04-04 00:34:56.284620 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.16s 2026-04-04 00:34:56.284631 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.91s 2026-04-04 00:34:56.284641 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.79s 2026-04-04 00:34:56.284652 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-04-04 00:34:56.284665 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s 2026-04-04 00:34:56.284684 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2026-04-04 00:34:56.284702 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2026-04-04 00:34:56.284719 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2026-04-04 00:34:56.284737 | orchestrator | osism.services.wireguard : Get 
private key - server --------------------- 0.41s 2026-04-04 00:34:56.284756 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2026-04-04 00:34:56.284774 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s 2026-04-04 00:34:56.447878 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-04-04 00:34:56.486347 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-04-04 00:34:56.486477 | orchestrator | Dload Upload Total Spent Left Speed 2026-04-04 00:34:56.559206 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 191 0 --:--:-- --:--:-- --:--:-- 194 2026-04-04 00:34:56.573065 | orchestrator | + osism apply --environment custom workarounds 2026-04-04 00:34:57.858553 | orchestrator | 2026-04-04 00:34:57 | INFO  | Trying to run play workarounds in environment custom 2026-04-04 00:35:07.898941 | orchestrator | 2026-04-04 00:35:07 | INFO  | Prepare task for execution of workarounds. 2026-04-04 00:35:07.975787 | orchestrator | 2026-04-04 00:35:07 | INFO  | Task 3dee91bd-81a3-4dae-aee1-fdd84a89e4f1 (workarounds) was prepared for execution. 2026-04-04 00:35:07.975879 | orchestrator | 2026-04-04 00:35:07 | INFO  | It takes a moment until task 3dee91bd-81a3-4dae-aee1-fdd84a89e4f1 (workarounds) has been started and output is visible here. 
2026-04-04 00:35:32.542906 | orchestrator | 2026-04-04 00:35:32.543037 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:35:32.543063 | orchestrator | 2026-04-04 00:35:32.543083 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-04-04 00:35:32.543102 | orchestrator | Saturday 04 April 2026 00:35:11 +0000 (0:00:00.176) 0:00:00.176 ******** 2026-04-04 00:35:32.543121 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543139 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543158 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543176 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543195 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543239 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543257 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-04-04 00:35:32.543275 | orchestrator | 2026-04-04 00:35:32.543295 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-04-04 00:35:32.543313 | orchestrator | 2026-04-04 00:35:32.543331 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-04-04 00:35:32.543349 | orchestrator | Saturday 04 April 2026 00:35:11 +0000 (0:00:00.680) 0:00:00.856 ******** 2026-04-04 00:35:32.543367 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:32.543386 | orchestrator | 2026-04-04 00:35:32.543405 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-04-04 00:35:32.543423 | orchestrator | 2026-04-04 00:35:32.543441 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-04-04 00:35:32.543460 | orchestrator | Saturday 04 April 2026 00:35:14 +0000 (0:00:02.470) 0:00:03.326 ******** 2026-04-04 00:35:32.543479 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:32.543495 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:32.543510 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:32.543528 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:32.543546 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:32.543564 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:32.543581 | orchestrator | 2026-04-04 00:35:32.543599 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-04-04 00:35:32.543617 | orchestrator | 2026-04-04 00:35:32.543635 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-04-04 00:35:32.543672 | orchestrator | Saturday 04 April 2026 00:35:16 +0000 (0:00:02.333) 0:00:05.660 ******** 2026-04-04 00:35:32.543691 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543710 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543759 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543778 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543796 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543814 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-04-04 00:35:32.543832 | orchestrator | 2026-04-04 00:35:32.543849 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-04-04 00:35:32.543868 | orchestrator | Saturday 04 April 2026 00:35:17 +0000 (0:00:01.387) 0:00:07.048 ******** 2026-04-04 00:35:32.543886 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:32.543904 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:32.543922 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:32.543940 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:32.543957 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:32.543975 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:32.543993 | orchestrator | 2026-04-04 00:35:32.544011 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-04-04 00:35:32.544029 | orchestrator | Saturday 04 April 2026 00:35:21 +0000 (0:00:03.989) 0:00:11.037 ******** 2026-04-04 00:35:32.544046 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:32.544061 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:32.544077 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:32.544092 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:32.544107 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:32.544123 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:32.544140 | orchestrator | 2026-04-04 00:35:32.544158 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-04-04 00:35:32.544176 | orchestrator | 2026-04-04 00:35:32.544193 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-04-04 00:35:32.544265 | orchestrator | Saturday 04 April 2026 00:35:22 +0000 (0:00:00.520) 0:00:11.557 ******** 2026-04-04 00:35:32.544284 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:32.544301 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:32.544319 | orchestrator | changed: [testbed-node-1] 2026-04-04 
00:35:32.544337 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:32.544355 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:32.544373 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:32.544390 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:32.544408 | orchestrator | 2026-04-04 00:35:32.544427 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-04-04 00:35:32.544444 | orchestrator | Saturday 04 April 2026 00:35:24 +0000 (0:00:01.766) 0:00:13.324 ******** 2026-04-04 00:35:32.544462 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:32.544479 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:32.544495 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:32.544511 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:32.544529 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:32.544547 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:32.544590 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:32.544609 | orchestrator | 2026-04-04 00:35:32.544626 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-04-04 00:35:32.544644 | orchestrator | Saturday 04 April 2026 00:35:25 +0000 (0:00:01.434) 0:00:14.758 ******** 2026-04-04 00:35:32.544662 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:35:32.544681 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:32.544698 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:32.544716 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:32.544734 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:32.544751 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:32.544769 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:32.544800 | orchestrator | 2026-04-04 00:35:32.544818 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-04-04 00:35:32.544836 | orchestrator 
| Saturday 04 April 2026 00:35:27 +0000 (0:00:01.813) 0:00:16.572 ******** 2026-04-04 00:35:32.544854 | orchestrator | changed: [testbed-manager] 2026-04-04 00:35:32.544872 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:32.544889 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:32.544907 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:32.544925 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:32.544944 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:32.544961 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:32.544979 | orchestrator | 2026-04-04 00:35:32.544997 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-04-04 00:35:32.545015 | orchestrator | Saturday 04 April 2026 00:35:29 +0000 (0:00:01.557) 0:00:18.129 ******** 2026-04-04 00:35:32.545033 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:35:32.545051 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:32.545068 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:32.545086 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:32.545104 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:32.545122 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:32.545139 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:32.545157 | orchestrator | 2026-04-04 00:35:32.545174 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-04-04 00:35:32.545192 | orchestrator | 2026-04-04 00:35:32.545234 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-04-04 00:35:32.545253 | orchestrator | Saturday 04 April 2026 00:35:29 +0000 (0:00:00.793) 0:00:18.922 ******** 2026-04-04 00:35:32.545271 | orchestrator | ok: [testbed-manager] 2026-04-04 00:35:32.545289 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:35:32.545306 | orchestrator | ok: 
[testbed-node-0] 2026-04-04 00:35:32.545324 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:35:32.545339 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:35:32.545355 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:35:32.545380 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:35:32.545397 | orchestrator | 2026-04-04 00:35:32.545412 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:35:32.545430 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:35:32.545450 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545467 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545483 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545498 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545514 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545529 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:32.545545 | orchestrator | 2026-04-04 00:35:32.545561 | orchestrator | 2026-04-04 00:35:32.545577 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:35:32.545593 | orchestrator | Saturday 04 April 2026 00:35:32 +0000 (0:00:02.658) 0:00:21.581 ******** 2026-04-04 00:35:32.545608 | orchestrator | =============================================================================== 2026-04-04 00:35:32.545635 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.99s 2026-04-04 00:35:32.545648 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.66s 2026-04-04 00:35:32.545658 | orchestrator | Apply netplan configuration --------------------------------------------- 2.47s 2026-04-04 00:35:32.545668 | orchestrator | Apply netplan configuration --------------------------------------------- 2.33s 2026-04-04 00:35:32.545677 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.81s 2026-04-04 00:35:32.545687 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.77s 2026-04-04 00:35:32.545696 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.56s 2026-04-04 00:35:32.545705 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.43s 2026-04-04 00:35:32.545715 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.39s 2026-04-04 00:35:32.545724 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.79s 2026-04-04 00:35:32.545734 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.68s 2026-04-04 00:35:32.545753 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.52s 2026-04-04 00:35:32.938578 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-04-04 00:35:44.241977 | orchestrator | 2026-04-04 00:35:44 | INFO  | Prepare task for execution of reboot. 2026-04-04 00:35:44.315265 | orchestrator | 2026-04-04 00:35:44 | INFO  | Task 633e8745-11da-4450-af90-7a4682776f30 (reboot) was prepared for execution. 2026-04-04 00:35:44.315405 | orchestrator | 2026-04-04 00:35:44 | INFO  | It takes a moment until task 633e8745-11da-4450-af90-7a4682776f30 (reboot) has been started and output is visible here. 
2026-04-04 00:35:55.805936 | orchestrator | 2026-04-04 00:35:55.806167 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.806270 | orchestrator | 2026-04-04 00:35:55.806282 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.806292 | orchestrator | Saturday 04 April 2026 00:35:47 +0000 (0:00:00.242) 0:00:00.242 ******** 2026-04-04 00:35:55.806301 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:55.806311 | orchestrator | 2026-04-04 00:35:55.806321 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.806329 | orchestrator | Saturday 04 April 2026 00:35:47 +0000 (0:00:00.144) 0:00:00.387 ******** 2026-04-04 00:35:55.806338 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:35:55.806347 | orchestrator | 2026-04-04 00:35:55.806356 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-04 00:35:55.806365 | orchestrator | Saturday 04 April 2026 00:35:48 +0000 (0:00:01.325) 0:00:01.713 ******** 2026-04-04 00:35:55.806374 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:35:55.806382 | orchestrator | 2026-04-04 00:35:55.806391 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.806400 | orchestrator | 2026-04-04 00:35:55.806408 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.806417 | orchestrator | Saturday 04 April 2026 00:35:49 +0000 (0:00:00.109) 0:00:01.823 ******** 2026-04-04 00:35:55.806426 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:55.806434 | orchestrator | 2026-04-04 00:35:55.806443 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.806452 | orchestrator | Saturday 04 April 
2026 00:35:49 +0000 (0:00:00.108) 0:00:01.931 ******** 2026-04-04 00:35:55.806463 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:35:55.806472 | orchestrator | 2026-04-04 00:35:55.806497 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-04 00:35:55.806508 | orchestrator | Saturday 04 April 2026 00:35:50 +0000 (0:00:01.033) 0:00:02.964 ******** 2026-04-04 00:35:55.806518 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:35:55.806552 | orchestrator | 2026-04-04 00:35:55.806563 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.806573 | orchestrator | 2026-04-04 00:35:55.806584 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.806594 | orchestrator | Saturday 04 April 2026 00:35:50 +0000 (0:00:00.105) 0:00:03.070 ******** 2026-04-04 00:35:55.806604 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:55.806614 | orchestrator | 2026-04-04 00:35:55.806624 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.806634 | orchestrator | Saturday 04 April 2026 00:35:50 +0000 (0:00:00.090) 0:00:03.160 ******** 2026-04-04 00:35:55.806643 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:35:55.806653 | orchestrator | 2026-04-04 00:35:55.806664 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-04 00:35:55.806673 | orchestrator | Saturday 04 April 2026 00:35:51 +0000 (0:00:01.081) 0:00:04.242 ******** 2026-04-04 00:35:55.806683 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:35:55.806694 | orchestrator | 2026-04-04 00:35:55.806704 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.806713 | orchestrator | 2026-04-04 00:35:55.806723 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.806733 | orchestrator | Saturday 04 April 2026 00:35:51 +0000 (0:00:00.115) 0:00:04.357 ******** 2026-04-04 00:35:55.806743 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:55.806753 | orchestrator | 2026-04-04 00:35:55.806764 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.806774 | orchestrator | Saturday 04 April 2026 00:35:51 +0000 (0:00:00.103) 0:00:04.461 ******** 2026-04-04 00:35:55.806784 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:35:55.806794 | orchestrator | 2026-04-04 00:35:55.806804 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-04 00:35:55.806814 | orchestrator | Saturday 04 April 2026 00:35:52 +0000 (0:00:01.015) 0:00:05.476 ******** 2026-04-04 00:35:55.806824 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:35:55.806834 | orchestrator | 2026-04-04 00:35:55.806845 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.806854 | orchestrator | 2026-04-04 00:35:55.806863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.806871 | orchestrator | Saturday 04 April 2026 00:35:52 +0000 (0:00:00.108) 0:00:05.584 ******** 2026-04-04 00:35:55.806880 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:55.806888 | orchestrator | 2026-04-04 00:35:55.806897 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.806911 | orchestrator | Saturday 04 April 2026 00:35:53 +0000 (0:00:00.196) 0:00:05.781 ******** 2026-04-04 00:35:55.806927 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:35:55.806941 | orchestrator | 2026-04-04 00:35:55.806956 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-04 00:35:55.806972 | orchestrator | Saturday 04 April 2026 00:35:54 +0000 (0:00:01.136) 0:00:06.917 ******** 2026-04-04 00:35:55.806986 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:35:55.806996 | orchestrator | 2026-04-04 00:35:55.807004 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-04 00:35:55.807013 | orchestrator | 2026-04-04 00:35:55.807021 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-04 00:35:55.807030 | orchestrator | Saturday 04 April 2026 00:35:54 +0000 (0:00:00.120) 0:00:07.037 ******** 2026-04-04 00:35:55.807038 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:55.807047 | orchestrator | 2026-04-04 00:35:55.807055 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-04 00:35:55.807064 | orchestrator | Saturday 04 April 2026 00:35:54 +0000 (0:00:00.100) 0:00:07.138 ******** 2026-04-04 00:35:55.807072 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:35:55.807081 | orchestrator | 2026-04-04 00:35:55.807098 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-04 00:35:55.807107 | orchestrator | Saturday 04 April 2026 00:35:55 +0000 (0:00:01.116) 0:00:08.254 ******** 2026-04-04 00:35:55.807134 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:35:55.807143 | orchestrator | 2026-04-04 00:35:55.807152 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:35:55.807162 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:55.807225 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:55.807235 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-04 00:35:55.807244 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:55.807253 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:55.807261 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:35:55.807270 | orchestrator | 2026-04-04 00:35:55.807278 | orchestrator | 2026-04-04 00:35:55.807287 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:35:55.807301 | orchestrator | Saturday 04 April 2026 00:35:55 +0000 (0:00:00.037) 0:00:08.291 ******** 2026-04-04 00:35:55.807311 | orchestrator | =============================================================================== 2026-04-04 00:35:55.807319 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 6.71s 2026-04-04 00:35:55.807328 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s 2026-04-04 00:35:55.807336 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2026-04-04 00:35:55.989753 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-04 00:36:07.284803 | orchestrator | 2026-04-04 00:36:07 | INFO  | Prepare task for execution of wait-for-connection. 2026-04-04 00:36:07.358235 | orchestrator | 2026-04-04 00:36:07 | INFO  | Task 92d57962-943f-499c-8722-60bde8457d81 (wait-for-connection) was prepared for execution. 2026-04-04 00:36:07.358328 | orchestrator | 2026-04-04 00:36:07 | INFO  | It takes a moment until task 92d57962-943f-499c-8722-60bde8457d81 (wait-for-connection) has been started and output is visible here. 
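The `osism apply wait-for-connection` step prepared here does, at the Ansible level, what a hand-rolled SSH polling loop would do in shell: keep retrying each rebooted node until it answers again. A minimal sketch of that idea, assuming the `wait_for_host` name, the SSH options, and the 600-second budget (none of which come from the play itself):

```shell
# Hypothetical shell equivalent of the wait-for-connection play: poll a
# host over SSH until it accepts a connection again after the reboot.
# Function name, SSH options, and the 600 s default budget are assumptions.
wait_for_host() {
    local host=$1
    local deadline=$(( SECONDS + ${2:-600} ))  # optional 2nd arg: timeout
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( SECONDS < deadline )) || return 1   # give up once the budget is spent
        sleep 10
    done
}

# Fan-out over the six nodes, mirroring the per-host results above (not run here):
#   for node in testbed-node-{0..5}; do wait_for_host "$node" & done; wait
```

The play reports each node `ok` once it is reachable, which is why all six hosts finish with `changed=0` in the recap.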
2026-04-04 00:36:22.213701 | orchestrator | 2026-04-04 00:36:22.213806 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-04 00:36:22.213820 | orchestrator | 2026-04-04 00:36:22.213831 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-04 00:36:22.213840 | orchestrator | Saturday 04 April 2026 00:36:10 +0000 (0:00:00.279) 0:00:00.279 ******** 2026-04-04 00:36:22.213849 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:36:22.213859 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:36:22.213868 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:36:22.213877 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:36:22.213887 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:36:22.213895 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:36:22.213904 | orchestrator | 2026-04-04 00:36:22.213913 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:36:22.213923 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.213945 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.213980 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.213990 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.213999 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.214007 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:36:22.214071 | orchestrator | 2026-04-04 00:36:22.214082 | orchestrator | 2026-04-04 00:36:22.214091 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-04 00:36:22.214100 | orchestrator | Saturday 04 April 2026 00:36:21 +0000 (0:00:11.550) 0:00:11.830 ******** 2026-04-04 00:36:22.214108 | orchestrator | =============================================================================== 2026-04-04 00:36:22.214117 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-04-04 00:36:22.385483 | orchestrator | + osism apply hddtemp 2026-04-04 00:36:33.741814 | orchestrator | 2026-04-04 00:36:33 | INFO  | Prepare task for execution of hddtemp. 2026-04-04 00:36:33.823941 | orchestrator | 2026-04-04 00:36:33 | INFO  | Task 486a581a-0eb6-4fea-a141-01a86a2413e2 (hddtemp) was prepared for execution. 2026-04-04 00:36:33.824037 | orchestrator | 2026-04-04 00:36:33 | INFO  | It takes a moment until task 486a581a-0eb6-4fea-a141-01a86a2413e2 (hddtemp) has been started and output is visible here. 2026-04-04 00:37:01.665898 | orchestrator | 2026-04-04 00:37:01.666008 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-04 00:37:01.666171 | orchestrator | 2026-04-04 00:37:01.666190 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-04 00:37:01.666202 | orchestrator | Saturday 04 April 2026 00:36:37 +0000 (0:00:00.341) 0:00:00.341 ******** 2026-04-04 00:37:01.666214 | orchestrator | ok: [testbed-manager] 2026-04-04 00:37:01.666226 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:37:01.666237 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:37:01.666248 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:37:01.666258 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:37:01.666269 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:37:01.666280 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:37:01.666291 | orchestrator | 2026-04-04 00:37:01.666302 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-04-04 00:37:01.666313 | orchestrator | Saturday 04 April 2026 00:36:37 +0000 (0:00:00.581) 0:00:00.923 ******** 2026-04-04 00:37:01.666327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:37:01.666341 | orchestrator | 2026-04-04 00:37:01.666352 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-04 00:37:01.666363 | orchestrator | Saturday 04 April 2026 00:36:38 +0000 (0:00:01.135) 0:00:02.059 ******** 2026-04-04 00:37:01.666374 | orchestrator | ok: [testbed-manager] 2026-04-04 00:37:01.666402 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:37:01.666413 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:37:01.666424 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:37:01.666435 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:37:01.666446 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:37:01.666456 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:37:01.666467 | orchestrator | 2026-04-04 00:37:01.666478 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-04 00:37:01.666489 | orchestrator | Saturday 04 April 2026 00:36:41 +0000 (0:00:02.427) 0:00:04.487 ******** 2026-04-04 00:37:01.666500 | orchestrator | changed: [testbed-manager] 2026-04-04 00:37:01.666540 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:37:01.666552 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:37:01.666563 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:37:01.666574 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:37:01.666584 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:37:01.666595 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:37:01.666606 | 
orchestrator | 2026-04-04 00:37:01.666617 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-04-04 00:37:01.666628 | orchestrator | Saturday 04 April 2026 00:36:42 +0000 (0:00:00.941) 0:00:05.428 ******** 2026-04-04 00:37:01.666639 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:37:01.666649 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:37:01.666660 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:37:01.666671 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:37:01.666681 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:37:01.666692 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:37:01.666703 | orchestrator | ok: [testbed-manager] 2026-04-04 00:37:01.666713 | orchestrator | 2026-04-04 00:37:01.666724 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-04 00:37:01.666735 | orchestrator | Saturday 04 April 2026 00:36:44 +0000 (0:00:01.976) 0:00:07.405 ******** 2026-04-04 00:37:01.666746 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:37:01.666757 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:37:01.666767 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:37:01.666778 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:37:01.666788 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:37:01.666799 | orchestrator | changed: [testbed-manager] 2026-04-04 00:37:01.666809 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:37:01.666820 | orchestrator | 2026-04-04 00:37:01.666831 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-04 00:37:01.666842 | orchestrator | Saturday 04 April 2026 00:36:44 +0000 (0:00:00.579) 0:00:07.984 ******** 2026-04-04 00:37:01.666852 | orchestrator | changed: [testbed-manager] 2026-04-04 00:37:01.666863 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:37:01.666874 | orchestrator | changed: [testbed-node-1] 
2026-04-04 00:37:01.666885 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:37:01.666895 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:37:01.666907 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:37:01.666917 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:37:01.666928 | orchestrator | 2026-04-04 00:37:01.666939 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-04 00:37:01.666950 | orchestrator | Saturday 04 April 2026 00:36:58 +0000 (0:00:13.413) 0:00:21.398 ******** 2026-04-04 00:37:01.666961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:37:01.666972 | orchestrator | 2026-04-04 00:37:01.666983 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-04 00:37:01.666994 | orchestrator | Saturday 04 April 2026 00:36:59 +0000 (0:00:01.205) 0:00:22.603 ******** 2026-04-04 00:37:01.667005 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:37:01.667016 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:37:01.667027 | orchestrator | changed: [testbed-manager] 2026-04-04 00:37:01.667038 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:37:01.667048 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:37:01.667059 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:37:01.667069 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:37:01.667107 | orchestrator | 2026-04-04 00:37:01.667126 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:37:01.667138 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:37:01.667182 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667194 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667205 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667216 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667227 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667238 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:37:01.667248 | orchestrator | 2026-04-04 00:37:01.667259 | orchestrator | 2026-04-04 00:37:01.667270 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:37:01.667281 | orchestrator | Saturday 04 April 2026 00:37:01 +0000 (0:00:01.981) 0:00:24.585 ******** 2026-04-04 00:37:01.667298 | orchestrator | =============================================================================== 2026-04-04 00:37:01.667309 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.41s 2026-04-04 00:37:01.667320 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.43s 2026-04-04 00:37:01.667331 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.98s 2026-04-04 00:37:01.667341 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.98s 2026-04-04 00:37:01.667352 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 2026-04-04 00:37:01.667363 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.14s 2026-04-04 00:37:01.667373 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 0.94s 2026-04-04 00:37:01.667384 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s 2026-04-04 00:37:01.667395 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.58s 2026-04-04 00:37:01.850612 | orchestrator | ++ semver latest 7.1.1 2026-04-04 00:37:01.904936 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:37:01.905031 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:37:01.905046 | orchestrator | + sudo systemctl restart manager.service 2026-04-04 00:37:15.939785 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 00:37:15.939897 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-04 00:37:15.939915 | orchestrator | + local max_attempts=60 2026-04-04 00:37:15.939927 | orchestrator | + local name=ceph-ansible 2026-04-04 00:37:15.939934 | orchestrator | + local attempt_num=1 2026-04-04 00:37:15.939941 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:15.968850 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:15.968942 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:15.968959 | orchestrator | + sleep 5 2026-04-04 00:37:20.974697 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:21.000606 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:21.000712 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:21.000733 | orchestrator | + sleep 5 2026-04-04 00:37:26.004173 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:26.041926 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:26.042134 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:26.042152 | orchestrator | + sleep 5 2026-04-04 00:37:31.046246 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:31.086417 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:31.086520 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:31.086608 | orchestrator | + sleep 5 2026-04-04 00:37:36.090482 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:36.127224 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:36.127324 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:36.127336 | orchestrator | + sleep 5 2026-04-04 00:37:41.131103 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:41.164414 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:41.164546 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:41.164576 | orchestrator | + sleep 5 2026-04-04 00:37:46.168608 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:46.208919 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:46.209068 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:46.209085 | orchestrator | + sleep 5 2026-04-04 00:37:51.212868 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:51.243774 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:51.243878 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:51.243895 | orchestrator | + sleep 5 2026-04-04 00:37:56.246950 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:37:56.285833 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:37:56.285921 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:37:56.285931 | orchestrator | + sleep 5 2026-04-04 00:38:01.291450 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:38:01.334988 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:01.335118 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:38:01.335135 | orchestrator | + sleep 5 2026-04-04 00:38:06.339082 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:38:06.376370 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:06.376470 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:38:06.376484 | orchestrator | + sleep 5 2026-04-04 00:38:11.381637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:38:11.415474 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:11.415578 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:38:11.415595 | orchestrator | + sleep 5 2026-04-04 00:38:16.420465 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:38:16.458188 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:16.458238 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-04 00:38:16.458249 | orchestrator | + sleep 5 2026-04-04 00:38:21.462811 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-04 00:38:21.496123 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:21.496373 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-04 00:38:21.496399 | orchestrator | + local max_attempts=60 2026-04-04 00:38:21.496411 | orchestrator | + local name=kolla-ansible 2026-04-04 00:38:21.496423 | orchestrator | + local attempt_num=1 2026-04-04 00:38:21.496445 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-04 00:38:21.526710 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:21.526815 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-04-04 00:38:21.526836 | orchestrator | + local max_attempts=60 2026-04-04 00:38:21.526858 | orchestrator | + local name=osism-ansible 2026-04-04 00:38:21.526876 | orchestrator | + local attempt_num=1 2026-04-04 00:38:21.527264 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-04 00:38:21.554932 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-04 00:38:21.555097 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-04 00:38:21.555125 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-04 00:38:21.700900 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-04 00:38:21.809233 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-04 00:38:21.950117 | orchestrator | ARA in osism-ansible already disabled. 2026-04-04 00:38:22.071471 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-04 00:38:22.071571 | orchestrator | + osism apply gather-facts 2026-04-04 00:38:33.277429 | orchestrator | 2026-04-04 00:38:33 | INFO  | Prepare task for execution of gather-facts. 2026-04-04 00:38:33.344762 | orchestrator | 2026-04-04 00:38:33 | INFO  | Task db354fb8-4f8a-4c48-bebf-95f90f4d9cdb (gather-facts) was prepared for execution. 2026-04-04 00:38:33.344895 | orchestrator | 2026-04-04 00:38:33 | INFO  | It takes a moment until task db354fb8-4f8a-4c48-bebf-95f90f4d9cdb (gather-facts) has been started and output is visible here. 
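The `wait_for_container_healthy` calls traced above poll `docker inspect` for the container's health status every five seconds until it reports `healthy`, giving up after a maximum number of attempts. Reconstructed from the trace as a self-contained function (the name, arguments, and poll interval match the log; the failure message is an assumption):

```shell
# Reconstruction of the wait_for_container_healthy helper expanded in the
# trace above. Polls the Docker health status every 5 seconds.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2  # assumed message
            return 1
        fi
        sleep 5
    done
}

# Usage from the deploy script:
#   wait_for_container_healthy 60 ceph-ansible
```

In the run above, `ceph-ansible` passes through `unhealthy` and `starting` before reporting `healthy` after roughly a minute, while `kolla-ansible` and `osism-ansible` are healthy on the first check.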
2026-04-04 00:38:45.509820 | orchestrator | 2026-04-04 00:38:45.509905 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 00:38:45.509914 | orchestrator | 2026-04-04 00:38:45.509919 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-04 00:38:45.509940 | orchestrator | Saturday 04 April 2026 00:38:36 +0000 (0:00:00.258) 0:00:00.258 ******** 2026-04-04 00:38:45.510053 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:38:45.510059 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:38:45.510064 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:38:45.510068 | orchestrator | ok: [testbed-manager] 2026-04-04 00:38:45.510072 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:38:45.510076 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:38:45.510080 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:38:45.510084 | orchestrator | 2026-04-04 00:38:45.510088 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 00:38:45.510092 | orchestrator | 2026-04-04 00:38:45.510096 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 00:38:45.510100 | orchestrator | Saturday 04 April 2026 00:38:44 +0000 (0:00:08.549) 0:00:08.808 ******** 2026-04-04 00:38:45.510104 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:38:45.510109 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:38:45.510113 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:38:45.510117 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:38:45.510121 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:38:45.510125 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:38:45.510128 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:38:45.510132 | orchestrator | 2026-04-04 00:38:45.510136 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-04 00:38:45.510140 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510145 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510149 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510153 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510156 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510160 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510164 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:38:45.510168 | orchestrator | 2026-04-04 00:38:45.510172 | orchestrator | 2026-04-04 00:38:45.510176 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:38:45.510180 | orchestrator | Saturday 04 April 2026 00:38:45 +0000 (0:00:00.536) 0:00:09.345 ******** 2026-04-04 00:38:45.510184 | orchestrator | =============================================================================== 2026-04-04 00:38:45.510187 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.55s 2026-04-04 00:38:45.510191 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2026-04-04 00:38:45.671393 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-04 00:38:45.690719 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-04 
00:38:45.709635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-04 00:38:45.727559 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-04 00:38:45.741310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-04 00:38:45.760481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-04 00:38:45.777569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-04 00:38:45.796397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-04 00:38:45.814610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-04 00:38:45.833360 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-04 00:38:45.847460 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-04 00:38:45.867389 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-04 00:38:45.884094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-04 00:38:45.903360 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-04 00:38:45.922777 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-04 00:38:45.943568 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-04 00:38:45.962867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-04 00:38:45.980743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-04 00:38:46.002806 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-04 00:38:46.020357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-04 00:38:46.038182 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-04 00:38:46.054804 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-04 00:38:46.073662 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-04 00:38:46.094284 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-04 00:38:46.384530 | orchestrator | ok: Runtime: 0:23:58.592228 2026-04-04 00:38:46.516019 | 2026-04-04 00:38:46.516224 | TASK [Deploy services] 2026-04-04 00:38:47.053556 | orchestrator | skipping: Conditional result was False 2026-04-04 00:38:47.072798 | 2026-04-04 00:38:47.073046 | TASK [Deploy in a nutshell] 2026-04-04 00:38:47.775106 | orchestrator | + set -e 2026-04-04 00:38:47.775319 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 00:38:47.775345 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 00:38:47.775367 | orchestrator | ++ INTERACTIVE=false 2026-04-04 00:38:47.775380 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 00:38:47.775393 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 00:38:47.775406 | 
orchestrator | + source /opt/manager-vars.sh 2026-04-04 00:38:47.776131 | orchestrator | 2026-04-04 00:38:47.776176 | orchestrator | # PULL IMAGES 2026-04-04 00:38:47.776194 | orchestrator | 2026-04-04 00:38:47.776208 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 00:38:47.776230 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 00:38:47.776245 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 00:38:47.776264 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 00:38:47.776295 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 00:38:47.776316 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 00:38:47.776328 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 00:38:47.776346 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 00:38:47.776360 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 00:38:47.776374 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 00:38:47.776387 | orchestrator | ++ export ARA=false 2026-04-04 00:38:47.776398 | orchestrator | ++ ARA=false 2026-04-04 00:38:47.776409 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 00:38:47.776420 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 00:38:47.776431 | orchestrator | ++ export TEMPEST=true 2026-04-04 00:38:47.776442 | orchestrator | ++ TEMPEST=true 2026-04-04 00:38:47.776452 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 00:38:47.776463 | orchestrator | ++ IS_ZUUL=true 2026-04-04 00:38:47.776474 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:38:47.776485 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 00:38:47.776496 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 00:38:47.776506 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 00:38:47.776517 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 00:38:47.776528 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 00:38:47.776539 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 00:38:47.776550 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 00:38:47.776561 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 00:38:47.776571 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 00:38:47.776582 | orchestrator | + echo 2026-04-04 00:38:47.776593 | orchestrator | + echo '# PULL IMAGES' 2026-04-04 00:38:47.776604 | orchestrator | + echo 2026-04-04 00:38:47.776622 | orchestrator | ++ semver latest 7.0.0 2026-04-04 00:38:47.831425 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 00:38:47.831540 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 00:38:47.831560 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-04 00:38:49.037490 | orchestrator | 2026-04-04 00:38:49 | INFO  | Trying to run play pull-images in environment custom 2026-04-04 00:38:59.103471 | orchestrator | 2026-04-04 00:38:59 | INFO  | Prepare task for execution of pull-images. 2026-04-04 00:38:59.180585 | orchestrator | 2026-04-04 00:38:59 | INFO  | Task 12f7af78-e2a7-4b15-8542-b50cac88f1cd (pull-images) was prepared for execution. 2026-04-04 00:38:59.180697 | orchestrator | 2026-04-04 00:38:59 | INFO  | Task 12f7af78-e2a7-4b15-8542-b50cac88f1cd is running in background. No more output. Check ARA for logs. 2026-04-04 00:39:00.484092 | orchestrator | 2026-04-04 00:39:00 | INFO  | Trying to run play wipe-partitions in environment custom 2026-04-04 00:39:10.550818 | orchestrator | 2026-04-04 00:39:10 | INFO  | Prepare task for execution of wipe-partitions. 2026-04-04 00:39:10.624395 | orchestrator | 2026-04-04 00:39:10 | INFO  | Task 3e3281fa-28eb-4601-869b-48ca6e3b4d25 (wipe-partitions) was prepared for execution. 2026-04-04 00:39:10.624478 | orchestrator | 2026-04-04 00:39:10 | INFO  | It takes a moment until task 3e3281fa-28eb-4601-869b-48ca6e3b4d25 (wipe-partitions) has been started and output is visible here. 
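The trace above guards the `pull-images` run with a `semver` comparison of `MANAGER_VERSION` against `7.0.0`, with `latest` handled as a special case. A minimal sketch of that guard, assuming the real `semver` helper prints `-1`/`0`/`1` like the illustrative `semver_cmp` below (the function name and implementation are not from the testbed scripts):

```shell
# Illustrative stand-in for the testbed's `semver` helper: prints -1, 0, or 1
# depending on how $1 compares to $2; non-numeric tags such as "latest"
# sort before any real version, matching the -1 seen in the trace.
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0; return; fi
    case "$1" in *[!0-9.]*) echo -1; return;; esac
    case "$2" in *[!0-9.]*) echo 1; return;; esac
    if [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

MANAGER_VERSION=latest
# Same shape as the trace: run the play for >= 7.0.0 OR for "latest".
if [ "$(semver_cmp "$MANAGER_VERSION" 7.0.0)" -ge 0 ] \
   || [ "$MANAGER_VERSION" = latest ]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

This mirrors why the job proceeded: `semver latest 7.0.0` returned `-1`, the `-ge 0` test failed, and the explicit `latest` match triggered the apply.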
2026-04-04 00:39:22.042149 | orchestrator | 2026-04-04 00:39:22.042232 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-04-04 00:39:22.042239 | orchestrator | 2026-04-04 00:39:22.042244 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-04-04 00:39:22.042251 | orchestrator | Saturday 04 April 2026 00:39:13 +0000 (0:00:00.159) 0:00:00.159 ******** 2026-04-04 00:39:22.042273 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:39:22.042279 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:39:22.042286 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:39:22.042292 | orchestrator | 2026-04-04 00:39:22.042298 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-04-04 00:39:22.042304 | orchestrator | Saturday 04 April 2026 00:39:14 +0000 (0:00:00.991) 0:00:01.150 ******** 2026-04-04 00:39:22.042314 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:22.042320 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:39:22.042327 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:39:22.042333 | orchestrator | 2026-04-04 00:39:22.042339 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-04 00:39:22.042346 | orchestrator | Saturday 04 April 2026 00:39:14 +0000 (0:00:00.237) 0:00:01.388 ******** 2026-04-04 00:39:22.042351 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:39:22.042359 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:39:22.042364 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:39:22.042370 | orchestrator | 2026-04-04 00:39:22.042377 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-04 00:39:22.042383 | orchestrator | Saturday 04 April 2026 00:39:15 +0000 (0:00:00.540) 0:00:01.929 ******** 2026-04-04 00:39:22.042389 | orchestrator | skipping: 
[testbed-node-3] 2026-04-04 00:39:22.042394 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:39:22.042400 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:39:22.042406 | orchestrator | 2026-04-04 00:39:22.042413 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-04 00:39:22.042419 | orchestrator | Saturday 04 April 2026 00:39:15 +0000 (0:00:00.213) 0:00:02.143 ******** 2026-04-04 00:39:22.042426 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-04 00:39:22.042435 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-04 00:39:22.042442 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-04 00:39:22.042448 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-04 00:39:22.042452 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-04 00:39:22.042455 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-04 00:39:22.042459 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-04 00:39:22.042463 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-04 00:39:22.042467 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-04 00:39:22.042471 | orchestrator | 2026-04-04 00:39:22.042475 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-04 00:39:22.042479 | orchestrator | Saturday 04 April 2026 00:39:17 +0000 (0:00:01.356) 0:00:03.500 ******** 2026-04-04 00:39:22.042483 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-04 00:39:22.042487 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-04 00:39:22.042491 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-04 00:39:22.042494 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-04 00:39:22.042498 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-04 00:39:22.042502 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-04-04 00:39:22.042506 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-04 00:39:22.042509 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-04 00:39:22.042513 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-04 00:39:22.042517 | orchestrator | 2026-04-04 00:39:22.042521 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-04 00:39:22.042524 | orchestrator | Saturday 04 April 2026 00:39:18 +0000 (0:00:01.424) 0:00:04.925 ******** 2026-04-04 00:39:22.042528 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-04 00:39:22.042532 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-04 00:39:22.042536 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-04 00:39:22.042544 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-04 00:39:22.042554 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-04 00:39:22.042557 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-04 00:39:22.042561 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-04 00:39:22.042565 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-04 00:39:22.042569 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-04-04 00:39:22.042572 | orchestrator | 2026-04-04 00:39:22.042576 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-04-04 00:39:22.042580 | orchestrator | Saturday 04 April 2026 00:39:20 +0000 (0:00:02.044) 0:00:06.969 ******** 2026-04-04 00:39:22.042584 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:39:22.042588 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:39:22.042591 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:39:22.042595 | orchestrator | 2026-04-04 00:39:22.042599 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-04-04 00:39:22.042603 | orchestrator | Saturday 04 April 2026 00:39:21 +0000 (0:00:00.573) 0:00:07.543 ******** 2026-04-04 00:39:22.042607 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:39:22.042610 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:39:22.042614 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:39:22.042618 | orchestrator | 2026-04-04 00:39:22.042622 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:39:22.042627 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:22.042633 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:22.042649 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:22.042654 | orchestrator | 2026-04-04 00:39:22.042658 | orchestrator | 2026-04-04 00:39:22.042663 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:39:22.042668 | orchestrator | Saturday 04 April 2026 00:39:21 +0000 (0:00:00.753) 0:00:08.296 ******** 2026-04-04 00:39:22.042672 | orchestrator | =============================================================================== 2026-04-04 00:39:22.042677 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.04s 2026-04-04 00:39:22.042681 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.42s 2026-04-04 00:39:22.042686 | orchestrator | Check device availability ----------------------------------------------- 1.36s 2026-04-04 00:39:22.042690 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.99s 2026-04-04 00:39:22.042694 | orchestrator | Request device events from the kernel 
----------------------------------- 0.75s 2026-04-04 00:39:22.042699 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s 2026-04-04 00:39:22.042704 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-04-04 00:39:22.042708 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2026-04-04 00:39:22.042713 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2026-04-04 00:39:33.442243 | orchestrator | 2026-04-04 00:39:33 | INFO  | Prepare task for execution of facts. 2026-04-04 00:39:33.512991 | orchestrator | 2026-04-04 00:39:33 | INFO  | Task 7438728e-7610-4abc-9d1d-4ce00e40fd07 (facts) was prepared for execution. 2026-04-04 00:39:33.513095 | orchestrator | 2026-04-04 00:39:33 | INFO  | It takes a moment until task 7438728e-7610-4abc-9d1d-4ce00e40fd07 (facts) has been started and output is visible here. 2026-04-04 00:39:44.512448 | orchestrator | 2026-04-04 00:39:44.512561 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-04 00:39:44.512580 | orchestrator | 2026-04-04 00:39:44.512618 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-04 00:39:44.512631 | orchestrator | Saturday 04 April 2026 00:39:36 +0000 (0:00:00.290) 0:00:00.290 ******** 2026-04-04 00:39:44.512641 | orchestrator | ok: [testbed-manager] 2026-04-04 00:39:44.512654 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:39:44.512665 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:39:44.512676 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:39:44.512687 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:39:44.512698 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:39:44.512709 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:39:44.512718 | orchestrator | 2026-04-04 00:39:44.512739 | orchestrator | TASK 
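The wipe-partitions play above boils down to two destructive steps per data disk (`/dev/sdb`..`/dev/sdd`): drop filesystem/LVM signatures with `wipefs`, then zero the first 32 MiB to clear partition tables and labels, followed by a udev reload/trigger. A sketch of that sequence, demonstrated on a file-backed image rather than a real disk (`wipe_device` is an illustrative name, not from the playbook; the udev steps need root and are shown as comments only):

```shell
# Rough equivalent of the play's wipe tasks, safe to run against a file.
wipe_device() {
    dev="$1"
    # TASK [Wipe partitions with wipefs]: remove all known signatures.
    wipefs --all "$dev" 2>/dev/null || true
    # TASK [Overwrite first 32M with zeros]: clears GPT/MBR and any labels.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
    # On a real disk the play then reloads udev rules and requests events:
    #   udevadm control --reload && udevadm trigger
}

img=$(mktemp)
dd if=/dev/urandom of="$img" bs=1M count=32 status=none
wipe_device "$img"
# After wiping, the first 32 MiB read back as zeros.
cmp -s -n $((32*1024*1024)) "$img" /dev/zero && echo wiped
rm -f "$img"
```

The `conv=notrunc` keeps the target's size intact, which matters for block devices where truncation is meaningless but matters for this file-backed demonstration.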
[osism.commons.facts : Copy fact files] *********************************** 2026-04-04 00:39:44.512746 | orchestrator | Saturday 04 April 2026 00:39:37 +0000 (0:00:01.245) 0:00:01.535 ******** 2026-04-04 00:39:44.512753 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:39:44.512759 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:39:44.512766 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:39:44.512772 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:39:44.512778 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:44.512784 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:39:44.512790 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:39:44.512796 | orchestrator | 2026-04-04 00:39:44.512802 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 00:39:44.512808 | orchestrator | 2026-04-04 00:39:44.512815 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-04 00:39:44.512822 | orchestrator | Saturday 04 April 2026 00:39:38 +0000 (0:00:01.166) 0:00:02.702 ******** 2026-04-04 00:39:44.512828 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:39:44.512834 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:39:44.512841 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:39:44.512847 | orchestrator | ok: [testbed-manager] 2026-04-04 00:39:44.512853 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:39:44.512862 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:39:44.512872 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:39:44.512932 | orchestrator | 2026-04-04 00:39:44.512946 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 00:39:44.512957 | orchestrator | 2026-04-04 00:39:44.512966 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 00:39:44.512977 | orchestrator | Saturday 04 
April 2026 00:39:43 +0000 (0:00:04.836) 0:00:07.538 ******** 2026-04-04 00:39:44.512987 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:39:44.512998 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:39:44.513007 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:39:44.513016 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:39:44.513026 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:44.513036 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:39:44.513047 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:39:44.513058 | orchestrator | 2026-04-04 00:39:44.513068 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:39:44.513079 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513091 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513101 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513112 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513123 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513144 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513154 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:39:44.513165 | orchestrator | 2026-04-04 00:39:44.513176 | orchestrator | 2026-04-04 00:39:44.513186 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:39:44.513195 | orchestrator | Saturday 04 April 2026 00:39:44 +0000 (0:00:00.504) 0:00:08.043 ******** 2026-04-04 
00:39:44.513201 | orchestrator | =============================================================================== 2026-04-04 00:39:44.513207 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s 2026-04-04 00:39:44.513213 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-04-04 00:39:44.513219 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-04-04 00:39:44.513225 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-04-04 00:39:45.941099 | orchestrator | 2026-04-04 00:39:45 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-04-04 00:39:46.001278 | orchestrator | 2026-04-04 00:39:45 | INFO  | Task da4fbf0f-5a68-4007-999d-32094ccaa580 (ceph-configure-lvm-volumes) was prepared for execution. 2026-04-04 00:39:46.001368 | orchestrator | 2026-04-04 00:39:45 | INFO  | It takes a moment until task da4fbf0f-5a68-4007-999d-32094ccaa580 (ceph-configure-lvm-volumes) has been started and output is visible here. 
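The `osism.commons.facts` tasks above create a custom facts directory before gathering. Assuming the standard Ansible local-facts layout (`/etc/ansible/facts.d`, files surfacing under `ansible_local.<name>`), the mechanism looks like this; a temporary directory stands in for the real path so the sketch needs no root:

```shell
# Stand-in for /etc/ansible/facts.d, Ansible's default local-facts path.
facts_d=$(mktemp -d)/facts.d
mkdir -p "$facts_d"

# Any *.fact INI/JSON file (or executable emitting JSON) placed here is
# picked up by the setup module and exposed as ansible_local.testbed.*
printf '[testbed]\nrole=storage\n' > "$facts_d/testbed.fact"
cat "$facts_d/testbed.fact"
```

With the file in place on a managed host, `gather_facts` (as run in the "Gathers facts about hosts" task) would expose `ansible_local.testbed.role`.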
2026-04-04 00:39:56.629825 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:39:56.629963 | orchestrator | 2.16.14 2026-04-04 00:39:56.629977 | orchestrator | 2026-04-04 00:39:56.629994 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-04 00:39:56.630002 | orchestrator | 2026-04-04 00:39:56.630009 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:39:56.630056 | orchestrator | Saturday 04 April 2026 00:39:50 +0000 (0:00:00.253) 0:00:00.253 ******** 2026-04-04 00:39:56.630064 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 00:39:56.630071 | orchestrator | 2026-04-04 00:39:56.630078 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:39:56.630087 | orchestrator | Saturday 04 April 2026 00:39:50 +0000 (0:00:00.241) 0:00:00.494 ******** 2026-04-04 00:39:56.630097 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:39:56.630107 | orchestrator | 2026-04-04 00:39:56.630116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630126 | orchestrator | Saturday 04 April 2026 00:39:50 +0000 (0:00:00.211) 0:00:00.706 ******** 2026-04-04 00:39:56.630136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-04 00:39:56.630145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-04 00:39:56.630155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-04 00:39:56.630165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-04 00:39:56.630175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-04 
00:39:56.630186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-04 00:39:56.630198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-04 00:39:56.630204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-04 00:39:56.630210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-04 00:39:56.630217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-04 00:39:56.630241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-04 00:39:56.630252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-04 00:39:56.630261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-04 00:39:56.630270 | orchestrator | 2026-04-04 00:39:56.630279 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630289 | orchestrator | Saturday 04 April 2026 00:39:50 +0000 (0:00:00.320) 0:00:01.027 ******** 2026-04-04 00:39:56.630299 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630308 | orchestrator | 2026-04-04 00:39:56.630317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630328 | orchestrator | Saturday 04 April 2026 00:39:51 +0000 (0:00:00.393) 0:00:01.421 ******** 2026-04-04 00:39:56.630337 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630347 | orchestrator | 2026-04-04 00:39:56.630355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630365 | orchestrator | Saturday 04 April 2026 00:39:51 +0000 (0:00:00.171) 0:00:01.592 ******** 2026-04-04 
00:39:56.630372 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630378 | orchestrator | 2026-04-04 00:39:56.630385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630394 | orchestrator | Saturday 04 April 2026 00:39:51 +0000 (0:00:00.182) 0:00:01.774 ******** 2026-04-04 00:39:56.630406 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630416 | orchestrator | 2026-04-04 00:39:56.630427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630437 | orchestrator | Saturday 04 April 2026 00:39:51 +0000 (0:00:00.172) 0:00:01.947 ******** 2026-04-04 00:39:56.630446 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630453 | orchestrator | 2026-04-04 00:39:56.630493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630505 | orchestrator | Saturday 04 April 2026 00:39:51 +0000 (0:00:00.167) 0:00:02.115 ******** 2026-04-04 00:39:56.630516 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630527 | orchestrator | 2026-04-04 00:39:56.630537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630548 | orchestrator | Saturday 04 April 2026 00:39:52 +0000 (0:00:00.156) 0:00:02.272 ******** 2026-04-04 00:39:56.630556 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630563 | orchestrator | 2026-04-04 00:39:56.630570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630580 | orchestrator | Saturday 04 April 2026 00:39:52 +0000 (0:00:00.185) 0:00:02.457 ******** 2026-04-04 00:39:56.630590 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630600 | orchestrator | 2026-04-04 00:39:56.630610 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-04-04 00:39:56.630620 | orchestrator | Saturday 04 April 2026 00:39:52 +0000 (0:00:00.173) 0:00:02.631 ******** 2026-04-04 00:39:56.630632 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae) 2026-04-04 00:39:56.630644 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae) 2026-04-04 00:39:56.630655 | orchestrator | 2026-04-04 00:39:56.630662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630685 | orchestrator | Saturday 04 April 2026 00:39:52 +0000 (0:00:00.360) 0:00:02.992 ******** 2026-04-04 00:39:56.630691 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8) 2026-04-04 00:39:56.630697 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8) 2026-04-04 00:39:56.630702 | orchestrator | 2026-04-04 00:39:56.630708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630721 | orchestrator | Saturday 04 April 2026 00:39:53 +0000 (0:00:00.377) 0:00:03.369 ******** 2026-04-04 00:39:56.630727 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737) 2026-04-04 00:39:56.630733 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737) 2026-04-04 00:39:56.630738 | orchestrator | 2026-04-04 00:39:56.630744 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630750 | orchestrator | Saturday 04 April 2026 00:39:53 +0000 (0:00:00.677) 0:00:04.047 ******** 2026-04-04 00:39:56.630756 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9) 2026-04-04 00:39:56.630761 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9) 2026-04-04 00:39:56.630767 | orchestrator | 2026-04-04 00:39:56.630773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:39:56.630778 | orchestrator | Saturday 04 April 2026 00:39:54 +0000 (0:00:00.567) 0:00:04.615 ******** 2026-04-04 00:39:56.630784 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:39:56.630790 | orchestrator | 2026-04-04 00:39:56.630795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.630801 | orchestrator | Saturday 04 April 2026 00:39:55 +0000 (0:00:00.567) 0:00:05.183 ******** 2026-04-04 00:39:56.630812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-04 00:39:56.630818 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-04 00:39:56.630824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-04 00:39:56.630829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-04 00:39:56.630835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-04 00:39:56.630841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-04 00:39:56.630846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-04 00:39:56.630852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-04-04 00:39:56.630857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-04 00:39:56.630863 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-04 00:39:56.630887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-04 00:39:56.630897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-04 00:39:56.630906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-04 00:39:56.630916 | orchestrator | 2026-04-04 00:39:56.630925 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.630935 | orchestrator | Saturday 04 April 2026 00:39:55 +0000 (0:00:00.329) 0:00:05.512 ******** 2026-04-04 00:39:56.630945 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630953 | orchestrator | 2026-04-04 00:39:56.630959 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.630965 | orchestrator | Saturday 04 April 2026 00:39:55 +0000 (0:00:00.189) 0:00:05.702 ******** 2026-04-04 00:39:56.630970 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.630976 | orchestrator | 2026-04-04 00:39:56.630982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.630987 | orchestrator | Saturday 04 April 2026 00:39:55 +0000 (0:00:00.183) 0:00:05.885 ******** 2026-04-04 00:39:56.630993 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.631004 | orchestrator | 2026-04-04 00:39:56.631010 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.631015 | orchestrator | Saturday 04 April 2026 00:39:55 +0000 (0:00:00.199) 0:00:06.085 ******** 2026-04-04 00:39:56.631021 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.631026 | orchestrator | 2026-04-04 00:39:56.631032 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-04 00:39:56.631038 | orchestrator | Saturday 04 April 2026 00:39:56 +0000 (0:00:00.174) 0:00:06.259 ******** 2026-04-04 00:39:56.631043 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.631049 | orchestrator | 2026-04-04 00:39:56.631058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.631064 | orchestrator | Saturday 04 April 2026 00:39:56 +0000 (0:00:00.199) 0:00:06.459 ******** 2026-04-04 00:39:56.631070 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.631078 | orchestrator | 2026-04-04 00:39:56.631087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:39:56.631096 | orchestrator | Saturday 04 April 2026 00:39:56 +0000 (0:00:00.178) 0:00:06.637 ******** 2026-04-04 00:39:56.631105 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:39:56.631114 | orchestrator | 2026-04-04 00:39:56.631131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.713924 | orchestrator | Saturday 04 April 2026 00:39:56 +0000 (0:00:00.164) 0:00:06.802 ******** 2026-04-04 00:40:03.714122 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714156 | orchestrator | 2026-04-04 00:40:03.714178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.714198 | orchestrator | Saturday 04 April 2026 00:39:56 +0000 (0:00:00.165) 0:00:06.967 ******** 2026-04-04 00:40:03.714216 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-04 00:40:03.714235 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-04 00:40:03.714254 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-04 00:40:03.714272 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-04 00:40:03.714290 | orchestrator | 2026-04-04 
00:40:03.714309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.714327 | orchestrator | Saturday 04 April 2026 00:39:57 +0000 (0:00:00.784) 0:00:07.751 ******** 2026-04-04 00:40:03.714345 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714364 | orchestrator | 2026-04-04 00:40:03.714382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.714400 | orchestrator | Saturday 04 April 2026 00:39:57 +0000 (0:00:00.165) 0:00:07.917 ******** 2026-04-04 00:40:03.714419 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714436 | orchestrator | 2026-04-04 00:40:03.714455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.714475 | orchestrator | Saturday 04 April 2026 00:39:57 +0000 (0:00:00.175) 0:00:08.093 ******** 2026-04-04 00:40:03.714495 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714517 | orchestrator | 2026-04-04 00:40:03.714541 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:03.714565 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.180) 0:00:08.273 ******** 2026-04-04 00:40:03.714585 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714605 | orchestrator | 2026-04-04 00:40:03.714624 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-04 00:40:03.714643 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.193) 0:00:08.467 ******** 2026-04-04 00:40:03.714663 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-04 00:40:03.714682 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-04 00:40:03.714701 | orchestrator | 2026-04-04 00:40:03.714719 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-04 00:40:03.714738 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.153) 0:00:08.620 ******** 2026-04-04 00:40:03.714786 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.714805 | orchestrator | 2026-04-04 00:40:03.714824 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-04 00:40:03.714842 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.115) 0:00:08.736 ******** 2026-04-04 00:40:03.714982 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715010 | orchestrator | 2026-04-04 00:40:03.715031 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-04 00:40:03.715049 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.133) 0:00:08.870 ******** 2026-04-04 00:40:03.715066 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715084 | orchestrator | 2026-04-04 00:40:03.715101 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-04 00:40:03.715119 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.126) 0:00:08.996 ******** 2026-04-04 00:40:03.715136 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:40:03.715154 | orchestrator | 2026-04-04 00:40:03.715171 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-04 00:40:03.715188 | orchestrator | Saturday 04 April 2026 00:39:58 +0000 (0:00:00.126) 0:00:09.122 ******** 2026-04-04 00:40:03.715206 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f0c57fe1-7323-5f70-a575-22ad75776519'}}) 2026-04-04 00:40:03.715222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e865913-a109-5f6b-9820-a5901c50a906'}}) 2026-04-04 00:40:03.715238 | orchestrator | 2026-04-04 00:40:03.715255 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-04 00:40:03.715272 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.160) 0:00:09.283 ******** 2026-04-04 00:40:03.715289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f0c57fe1-7323-5f70-a575-22ad75776519'}})  2026-04-04 00:40:03.715324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e865913-a109-5f6b-9820-a5901c50a906'}})  2026-04-04 00:40:03.715342 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715360 | orchestrator | 2026-04-04 00:40:03.715378 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-04 00:40:03.715395 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.135) 0:00:09.419 ******** 2026-04-04 00:40:03.715414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f0c57fe1-7323-5f70-a575-22ad75776519'}})  2026-04-04 00:40:03.715432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e865913-a109-5f6b-9820-a5901c50a906'}})  2026-04-04 00:40:03.715449 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715466 | orchestrator | 2026-04-04 00:40:03.715484 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-04 00:40:03.715501 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.326) 0:00:09.746 ******** 2026-04-04 00:40:03.715518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f0c57fe1-7323-5f70-a575-22ad75776519'}})  2026-04-04 00:40:03.715566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e865913-a109-5f6b-9820-a5901c50a906'}})  2026-04-04 00:40:03.715587 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715606 | 
orchestrator | 2026-04-04 00:40:03.715626 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-04 00:40:03.715646 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.147) 0:00:09.894 ******** 2026-04-04 00:40:03.715666 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:40:03.715687 | orchestrator | 2026-04-04 00:40:03.715706 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-04 00:40:03.715726 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.134) 0:00:10.028 ******** 2026-04-04 00:40:03.715744 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:40:03.715780 | orchestrator | 2026-04-04 00:40:03.715798 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-04 00:40:03.715817 | orchestrator | Saturday 04 April 2026 00:39:59 +0000 (0:00:00.130) 0:00:10.158 ******** 2026-04-04 00:40:03.715836 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715856 | orchestrator | 2026-04-04 00:40:03.715917 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-04 00:40:03.715937 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.121) 0:00:10.280 ******** 2026-04-04 00:40:03.715954 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.715972 | orchestrator | 2026-04-04 00:40:03.715989 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-04 00:40:03.716007 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.131) 0:00:10.412 ******** 2026-04-04 00:40:03.716024 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:40:03.716042 | orchestrator | 2026-04-04 00:40:03.716060 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-04 00:40:03.716078 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 
(0:00:00.131) 0:00:10.543 ********
2026-04-04 00:40:03.716097 | orchestrator | ok: [testbed-node-3] => {
2026-04-04 00:40:03.716115 | orchestrator |     "ceph_osd_devices": {
2026-04-04 00:40:03.716134 | orchestrator |         "sdb": {
2026-04-04 00:40:03.716152 | orchestrator |             "osd_lvm_uuid": "f0c57fe1-7323-5f70-a575-22ad75776519"
2026-04-04 00:40:03.716169 | orchestrator |         },
2026-04-04 00:40:03.716187 | orchestrator |         "sdc": {
2026-04-04 00:40:03.716206 | orchestrator |             "osd_lvm_uuid": "1e865913-a109-5f6b-9820-a5901c50a906"
2026-04-04 00:40:03.716225 | orchestrator |         }
2026-04-04 00:40:03.716243 | orchestrator |     }
2026-04-04 00:40:03.716262 | orchestrator | }
2026-04-04 00:40:03.716274 | orchestrator | 
2026-04-04 00:40:03.716285 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-04 00:40:03.716300 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.121) 0:00:10.665 ********
2026-04-04 00:40:03.716318 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:40:03.716338 | orchestrator | 
2026-04-04 00:40:03.716364 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-04 00:40:03.716383 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.131) 0:00:10.796 ********
2026-04-04 00:40:03.716400 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:40:03.716417 | orchestrator | 
2026-04-04 00:40:03.716434 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-04 00:40:03.716452 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.122) 0:00:10.919 ********
2026-04-04 00:40:03.716470 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:40:03.716487 | orchestrator | 
2026-04-04 00:40:03.716506 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-04 00:40:03.716525 | orchestrator | Saturday 04 April 2026 00:40:00 +0000 (0:00:00.132) 0:00:11.051 ********
2026-04-04 00:40:03.716545 | orchestrator | changed: [testbed-node-3] => {
2026-04-04 00:40:03.716564 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-04 00:40:03.716581 | orchestrator |         "ceph_osd_devices": {
2026-04-04 00:40:03.716593 | orchestrator |             "sdb": {
2026-04-04 00:40:03.716603 | orchestrator |                 "osd_lvm_uuid": "f0c57fe1-7323-5f70-a575-22ad75776519"
2026-04-04 00:40:03.716614 | orchestrator |             },
2026-04-04 00:40:03.716625 | orchestrator |             "sdc": {
2026-04-04 00:40:03.716636 | orchestrator |                 "osd_lvm_uuid": "1e865913-a109-5f6b-9820-a5901c50a906"
2026-04-04 00:40:03.716647 | orchestrator |             }
2026-04-04 00:40:03.716657 | orchestrator |         },
2026-04-04 00:40:03.716688 | orchestrator |         "lvm_volumes": [
2026-04-04 00:40:03.716700 | orchestrator |             {
2026-04-04 00:40:03.716711 | orchestrator |                 "data": "osd-block-f0c57fe1-7323-5f70-a575-22ad75776519",
2026-04-04 00:40:03.716722 | orchestrator |                 "data_vg": "ceph-f0c57fe1-7323-5f70-a575-22ad75776519"
2026-04-04 00:40:03.716748 | orchestrator |             },
2026-04-04 00:40:03.716759 | orchestrator |             {
2026-04-04 00:40:03.716770 | orchestrator |                 "data": "osd-block-1e865913-a109-5f6b-9820-a5901c50a906",
2026-04-04 00:40:03.716781 | orchestrator |                 "data_vg": "ceph-1e865913-a109-5f6b-9820-a5901c50a906"
2026-04-04 00:40:03.716912 | orchestrator |             }
2026-04-04 00:40:03.716927 | orchestrator |         ]
2026-04-04 00:40:03.716939 | orchestrator |     }
2026-04-04 00:40:03.716950 | orchestrator | }
2026-04-04 00:40:03.716961 | orchestrator | 
2026-04-04 00:40:03.716972 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-04 00:40:03.716982 | orchestrator | Saturday 04 April 2026 00:40:01 +0000 (0:00:00.203) 0:00:11.255 ********
2026-04-04 00:40:03.716993 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 00:40:03.717004 | orchestrator | 
2026-04-04 00:40:03.717015 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2026-04-04 00:40:03.717026 | orchestrator | 2026-04-04 00:40:03.717036 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:40:03.717047 | orchestrator | Saturday 04 April 2026 00:40:03 +0000 (0:00:02.156) 0:00:13.411 ******** 2026-04-04 00:40:03.717058 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-04 00:40:03.717069 | orchestrator | 2026-04-04 00:40:03.717091 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:40:03.717103 | orchestrator | Saturday 04 April 2026 00:40:03 +0000 (0:00:00.241) 0:00:13.652 ******** 2026-04-04 00:40:03.717114 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:40:03.717124 | orchestrator | 2026-04-04 00:40:03.717151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.170824 | orchestrator | Saturday 04 April 2026 00:40:03 +0000 (0:00:00.235) 0:00:13.888 ******** 2026-04-04 00:40:11.170957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:40:11.170974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:40:11.170985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:40:11.170995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:40:11.171005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:40:11.171014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:40:11.171024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:40:11.171038 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:40:11.171048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-04 00:40:11.171058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:40:11.171068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-04 00:40:11.171077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:40:11.171087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:40:11.171097 | orchestrator | 2026-04-04 00:40:11.171108 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171118 | orchestrator | Saturday 04 April 2026 00:40:04 +0000 (0:00:00.347) 0:00:14.236 ******** 2026-04-04 00:40:11.171128 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171139 | orchestrator | 2026-04-04 00:40:11.171150 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171159 | orchestrator | Saturday 04 April 2026 00:40:04 +0000 (0:00:00.200) 0:00:14.437 ******** 2026-04-04 00:40:11.171193 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171203 | orchestrator | 2026-04-04 00:40:11.171213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171223 | orchestrator | Saturday 04 April 2026 00:40:04 +0000 (0:00:00.187) 0:00:14.624 ******** 2026-04-04 00:40:11.171233 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171243 | orchestrator | 2026-04-04 00:40:11.171252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171262 | 
orchestrator | Saturday 04 April 2026 00:40:04 +0000 (0:00:00.222) 0:00:14.847 ******** 2026-04-04 00:40:11.171272 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171281 | orchestrator | 2026-04-04 00:40:11.171291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171301 | orchestrator | Saturday 04 April 2026 00:40:04 +0000 (0:00:00.180) 0:00:15.028 ******** 2026-04-04 00:40:11.171310 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171320 | orchestrator | 2026-04-04 00:40:11.171330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171340 | orchestrator | Saturday 04 April 2026 00:40:05 +0000 (0:00:00.582) 0:00:15.611 ******** 2026-04-04 00:40:11.171351 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171362 | orchestrator | 2026-04-04 00:40:11.171373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171384 | orchestrator | Saturday 04 April 2026 00:40:05 +0000 (0:00:00.174) 0:00:15.785 ******** 2026-04-04 00:40:11.171395 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171406 | orchestrator | 2026-04-04 00:40:11.171417 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171434 | orchestrator | Saturday 04 April 2026 00:40:05 +0000 (0:00:00.193) 0:00:15.978 ******** 2026-04-04 00:40:11.171453 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.171491 | orchestrator | 2026-04-04 00:40:11.171509 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171521 | orchestrator | Saturday 04 April 2026 00:40:05 +0000 (0:00:00.194) 0:00:16.173 ******** 2026-04-04 00:40:11.171532 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca) 2026-04-04 00:40:11.171544 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca) 2026-04-04 00:40:11.171555 | orchestrator | 2026-04-04 00:40:11.171584 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171596 | orchestrator | Saturday 04 April 2026 00:40:06 +0000 (0:00:00.393) 0:00:16.567 ******** 2026-04-04 00:40:11.171608 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55) 2026-04-04 00:40:11.171619 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55) 2026-04-04 00:40:11.171630 | orchestrator | 2026-04-04 00:40:11.171640 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171650 | orchestrator | Saturday 04 April 2026 00:40:06 +0000 (0:00:00.423) 0:00:16.991 ******** 2026-04-04 00:40:11.171660 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159) 2026-04-04 00:40:11.171669 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159) 2026-04-04 00:40:11.171679 | orchestrator | 2026-04-04 00:40:11.171689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:40:11.171717 | orchestrator | Saturday 04 April 2026 00:40:07 +0000 (0:00:00.422) 0:00:17.413 ******** 2026-04-04 00:40:11.171728 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8) 2026-04-04 00:40:11.171738 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8) 2026-04-04 00:40:11.171747 | orchestrator | 2026-04-04 00:40:11.171765 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-04 00:40:11.171774 | orchestrator | Saturday 04 April 2026 00:40:07 +0000 (0:00:00.431) 0:00:17.845 ******** 2026-04-04 00:40:11.171784 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:40:11.171793 | orchestrator | 2026-04-04 00:40:11.171803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.171812 | orchestrator | Saturday 04 April 2026 00:40:07 +0000 (0:00:00.329) 0:00:18.174 ******** 2026-04-04 00:40:11.171822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:40:11.171832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:40:11.171842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:40:11.171872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:40:11.171882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:40:11.171892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:40:11.171901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:40:11.171911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:40:11.171920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-04 00:40:11.171930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:40:11.171939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-04-04 00:40:11.171949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:40:11.171958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:40:11.171968 | orchestrator | 2026-04-04 00:40:11.171978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.171987 | orchestrator | Saturday 04 April 2026 00:40:08 +0000 (0:00:00.390) 0:00:18.565 ******** 2026-04-04 00:40:11.171997 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172006 | orchestrator | 2026-04-04 00:40:11.172016 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172026 | orchestrator | Saturday 04 April 2026 00:40:08 +0000 (0:00:00.200) 0:00:18.766 ******** 2026-04-04 00:40:11.172035 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172045 | orchestrator | 2026-04-04 00:40:11.172054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172064 | orchestrator | Saturday 04 April 2026 00:40:09 +0000 (0:00:00.636) 0:00:19.402 ******** 2026-04-04 00:40:11.172074 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172083 | orchestrator | 2026-04-04 00:40:11.172093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172103 | orchestrator | Saturday 04 April 2026 00:40:09 +0000 (0:00:00.200) 0:00:19.603 ******** 2026-04-04 00:40:11.172112 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172122 | orchestrator | 2026-04-04 00:40:11.172131 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172141 | orchestrator | Saturday 04 April 2026 00:40:09 +0000 (0:00:00.196) 0:00:19.799 ******** 2026-04-04 00:40:11.172150 
| orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172160 | orchestrator | 2026-04-04 00:40:11.172169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172179 | orchestrator | Saturday 04 April 2026 00:40:09 +0000 (0:00:00.196) 0:00:19.996 ******** 2026-04-04 00:40:11.172198 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172275 | orchestrator | 2026-04-04 00:40:11.172292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172303 | orchestrator | Saturday 04 April 2026 00:40:10 +0000 (0:00:00.209) 0:00:20.206 ******** 2026-04-04 00:40:11.172312 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172322 | orchestrator | 2026-04-04 00:40:11.172332 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172342 | orchestrator | Saturday 04 April 2026 00:40:10 +0000 (0:00:00.198) 0:00:20.404 ******** 2026-04-04 00:40:11.172351 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:11.172361 | orchestrator | 2026-04-04 00:40:11.172371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172380 | orchestrator | Saturday 04 April 2026 00:40:10 +0000 (0:00:00.193) 0:00:20.598 ******** 2026-04-04 00:40:11.172390 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-04 00:40:11.172401 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-04 00:40:11.172411 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-04 00:40:11.172420 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-04 00:40:11.172430 | orchestrator | 2026-04-04 00:40:11.172440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:11.172450 | orchestrator | Saturday 04 April 2026 00:40:11 +0000 (0:00:00.636) 
0:00:21.234 ******** 2026-04-04 00:40:11.172460 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469438 | orchestrator | 2026-04-04 00:40:16.469547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:16.469564 | orchestrator | Saturday 04 April 2026 00:40:11 +0000 (0:00:00.187) 0:00:21.422 ******** 2026-04-04 00:40:16.469576 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469589 | orchestrator | 2026-04-04 00:40:16.469600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:16.469611 | orchestrator | Saturday 04 April 2026 00:40:11 +0000 (0:00:00.180) 0:00:21.602 ******** 2026-04-04 00:40:16.469622 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469633 | orchestrator | 2026-04-04 00:40:16.469644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:40:16.469655 | orchestrator | Saturday 04 April 2026 00:40:11 +0000 (0:00:00.171) 0:00:21.774 ******** 2026-04-04 00:40:16.469666 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469677 | orchestrator | 2026-04-04 00:40:16.469688 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-04 00:40:16.469699 | orchestrator | Saturday 04 April 2026 00:40:11 +0000 (0:00:00.187) 0:00:21.961 ******** 2026-04-04 00:40:16.469710 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-04 00:40:16.469721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-04 00:40:16.469732 | orchestrator | 2026-04-04 00:40:16.469743 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-04 00:40:16.469754 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.375) 0:00:22.336 ******** 2026-04-04 00:40:16.469765 | orchestrator | skipping: 
[testbed-node-4] 2026-04-04 00:40:16.469776 | orchestrator | 2026-04-04 00:40:16.469787 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-04 00:40:16.469798 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.134) 0:00:22.470 ******** 2026-04-04 00:40:16.469809 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469820 | orchestrator | 2026-04-04 00:40:16.469831 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-04 00:40:16.469842 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.132) 0:00:22.603 ******** 2026-04-04 00:40:16.469919 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.469931 | orchestrator | 2026-04-04 00:40:16.469942 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-04 00:40:16.469953 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.143) 0:00:22.746 ******** 2026-04-04 00:40:16.469990 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:40:16.470004 | orchestrator | 2026-04-04 00:40:16.470067 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-04 00:40:16.470081 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.132) 0:00:22.879 ******** 2026-04-04 00:40:16.470095 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}}) 2026-04-04 00:40:16.470109 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}}) 2026-04-04 00:40:16.470122 | orchestrator | 2026-04-04 00:40:16.470134 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-04 00:40:16.470147 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.149) 0:00:23.028 ******** 2026-04-04 00:40:16.470160 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}})  2026-04-04 00:40:16.470174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}})  2026-04-04 00:40:16.470186 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.470198 | orchestrator | 2026-04-04 00:40:16.470211 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-04 00:40:16.470223 | orchestrator | Saturday 04 April 2026 00:40:12 +0000 (0:00:00.134) 0:00:23.163 ******** 2026-04-04 00:40:16.470236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}})  2026-04-04 00:40:16.470249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}})  2026-04-04 00:40:16.470261 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.470273 | orchestrator | 2026-04-04 00:40:16.470286 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-04 00:40:16.470298 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.120) 0:00:23.283 ******** 2026-04-04 00:40:16.470310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}})  2026-04-04 00:40:16.470323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}})  2026-04-04 00:40:16.470335 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:40:16.470345 | orchestrator | 2026-04-04 00:40:16.470372 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-04 00:40:16.470384 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 
(0:00:00.118) 0:00:23.402 ********
2026-04-04 00:40:16.470394 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:40:16.470405 | orchestrator | 
2026-04-04 00:40:16.470416 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-04 00:40:16.470427 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.104) 0:00:23.506 ********
2026-04-04 00:40:16.470437 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:40:16.470448 | orchestrator | 
2026-04-04 00:40:16.470459 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-04 00:40:16.470470 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.111) 0:00:23.618 ********
2026-04-04 00:40:16.470499 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.470511 | orchestrator | 
2026-04-04 00:40:16.470522 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-04 00:40:16.470532 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.107) 0:00:23.725 ********
2026-04-04 00:40:16.470543 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.470553 | orchestrator | 
2026-04-04 00:40:16.470564 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-04 00:40:16.470574 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.258) 0:00:23.984 ********
2026-04-04 00:40:16.470585 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.470604 | orchestrator | 
2026-04-04 00:40:16.470615 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-04 00:40:16.470627 | orchestrator | Saturday 04 April 2026 00:40:13 +0000 (0:00:00.139) 0:00:24.123 ********
2026-04-04 00:40:16.470648 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:40:16.470666 | orchestrator |     "ceph_osd_devices": {
2026-04-04 00:40:16.470687 | orchestrator |         "sdb": {
2026-04-04 00:40:16.470715 | orchestrator |             "osd_lvm_uuid": "2f7bbb1d-c278-5154-a1d3-309d62b79a2f"
2026-04-04 00:40:16.470733 | orchestrator |         },
2026-04-04 00:40:16.470751 | orchestrator |         "sdc": {
2026-04-04 00:40:16.470770 | orchestrator |             "osd_lvm_uuid": "b98f96ba-ddcd-5dd8-8e53-77fbcda444fa"
2026-04-04 00:40:16.470788 | orchestrator |         }
2026-04-04 00:40:16.470806 | orchestrator |     }
2026-04-04 00:40:16.470824 | orchestrator | }
2026-04-04 00:40:16.470841 | orchestrator | 
2026-04-04 00:40:16.470912 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-04 00:40:16.470925 | orchestrator | Saturday 04 April 2026 00:40:14 +0000 (0:00:00.117) 0:00:24.240 ********
2026-04-04 00:40:16.470935 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.470946 | orchestrator | 
2026-04-04 00:40:16.470957 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-04 00:40:16.470968 | orchestrator | Saturday 04 April 2026 00:40:14 +0000 (0:00:00.101) 0:00:24.341 ********
2026-04-04 00:40:16.470979 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.470990 | orchestrator | 
2026-04-04 00:40:16.471001 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-04 00:40:16.471011 | orchestrator | Saturday 04 April 2026 00:40:14 +0000 (0:00:00.108) 0:00:24.450 ********
2026-04-04 00:40:16.471022 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:40:16.471033 | orchestrator | 
2026-04-04 00:40:16.471044 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-04 00:40:16.471055 | orchestrator | Saturday 04 April 2026 00:40:14 +0000 (0:00:00.112) 0:00:24.563 ********
2026-04-04 00:40:16.471066 | orchestrator | changed: [testbed-node-4] => {
2026-04-04 00:40:16.471077 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-04 00:40:16.471087 | orchestrator |         "ceph_osd_devices": {
2026-04-04 00:40:16.471098 | orchestrator |             "sdb": {
2026-04-04 00:40:16.471109 | orchestrator |                 "osd_lvm_uuid": "2f7bbb1d-c278-5154-a1d3-309d62b79a2f"
2026-04-04 00:40:16.471120 | orchestrator |             },
2026-04-04 00:40:16.471130 | orchestrator |             "sdc": {
2026-04-04 00:40:16.471141 | orchestrator |                 "osd_lvm_uuid": "b98f96ba-ddcd-5dd8-8e53-77fbcda444fa"
2026-04-04 00:40:16.471152 | orchestrator |             }
2026-04-04 00:40:16.471163 | orchestrator |         },
2026-04-04 00:40:16.471173 | orchestrator |         "lvm_volumes": [
2026-04-04 00:40:16.471184 | orchestrator |             {
2026-04-04 00:40:16.471195 | orchestrator |                 "data": "osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f",
2026-04-04 00:40:16.471207 | orchestrator |                 "data_vg": "ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f"
2026-04-04 00:40:16.471217 | orchestrator |             },
2026-04-04 00:40:16.471228 | orchestrator |             {
2026-04-04 00:40:16.471239 | orchestrator |                 "data": "osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa",
2026-04-04 00:40:16.471250 | orchestrator |                 "data_vg": "ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa"
2026-04-04 00:40:16.471260 | orchestrator |             }
2026-04-04 00:40:16.471271 | orchestrator |         ]
2026-04-04 00:40:16.471281 | orchestrator |     }
2026-04-04 00:40:16.471292 | orchestrator | }
2026-04-04 00:40:16.471302 | orchestrator | 
2026-04-04 00:40:16.471313 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-04 00:40:16.471324 | orchestrator | Saturday 04 April 2026 00:40:14 +0000 (0:00:00.172) 0:00:24.735 ********
2026-04-04 00:40:16.471334 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-04-04 00:40:16.471345 | orchestrator | 
2026-04-04 00:40:16.471367 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-04-04 00:40:16.471378 | orchestrator | 
2026-04-04 00:40:16.471389 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 00:40:16.471400 | orchestrator | Saturday 04 April 2026 00:40:15 +0000 (0:00:00.885) 0:00:25.621 ********
2026-04-04 00:40:16.471411 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-04 00:40:16.471422 | orchestrator |
2026-04-04 00:40:16.471433 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:40:16.471443 | orchestrator | Saturday 04 April 2026 00:40:15 +0000 (0:00:00.349) 0:00:25.971 ********
2026-04-04 00:40:16.471454 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:40:16.471465 | orchestrator |
2026-04-04 00:40:16.471476 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:16.471486 | orchestrator | Saturday 04 April 2026 00:40:16 +0000 (0:00:00.446) 0:00:26.417 ********
2026-04-04 00:40:16.471497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-04 00:40:16.471507 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-04 00:40:16.471518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-04 00:40:16.471528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-04 00:40:16.471539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-04 00:40:16.471559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-04 00:40:24.352336 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-04 00:40:24.352444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-04 00:40:24.352460 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-04 00:40:24.352471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-04 00:40:24.352503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-04 00:40:24.352515 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-04 00:40:24.352526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-04 00:40:24.352537 | orchestrator |
2026-04-04 00:40:24.352550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352562 | orchestrator | Saturday 04 April 2026 00:40:16 +0000 (0:00:00.289) 0:00:26.707 ********
2026-04-04 00:40:24.352573 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352585 | orchestrator |
2026-04-04 00:40:24.352597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352608 | orchestrator | Saturday 04 April 2026 00:40:16 +0000 (0:00:00.178) 0:00:26.885 ********
2026-04-04 00:40:24.352619 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352629 | orchestrator |
2026-04-04 00:40:24.352641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352652 | orchestrator | Saturday 04 April 2026 00:40:16 +0000 (0:00:00.162) 0:00:27.048 ********
2026-04-04 00:40:24.352662 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352673 | orchestrator |
2026-04-04 00:40:24.352684 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352695 | orchestrator | Saturday 04 April 2026 00:40:17 +0000 (0:00:00.196) 0:00:27.244 ********
2026-04-04 00:40:24.352710 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352721 | orchestrator |
2026-04-04 00:40:24.352732 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352743 | orchestrator | Saturday 04 April 2026 00:40:17 +0000 (0:00:00.184) 0:00:27.429 ********
2026-04-04 00:40:24.352779 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352790 | orchestrator |
2026-04-04 00:40:24.352801 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352811 | orchestrator | Saturday 04 April 2026 00:40:17 +0000 (0:00:00.240) 0:00:27.669 ********
2026-04-04 00:40:24.352822 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352832 | orchestrator |
2026-04-04 00:40:24.352875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352889 | orchestrator | Saturday 04 April 2026 00:40:17 +0000 (0:00:00.198) 0:00:27.868 ********
2026-04-04 00:40:24.352901 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352914 | orchestrator |
2026-04-04 00:40:24.352927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352938 | orchestrator | Saturday 04 April 2026 00:40:17 +0000 (0:00:00.224) 0:00:28.092 ********
2026-04-04 00:40:24.352949 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.352959 | orchestrator |
2026-04-04 00:40:24.352970 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.352981 | orchestrator | Saturday 04 April 2026 00:40:18 +0000 (0:00:00.185) 0:00:28.278 ********
2026-04-04 00:40:24.352991 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c)
2026-04-04 00:40:24.353003 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c)
2026-04-04 00:40:24.353014 | orchestrator |
2026-04-04 00:40:24.353025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.353036 | orchestrator | Saturday 04 April 2026 00:40:18 +0000 (0:00:00.571) 0:00:28.849 ********
2026-04-04 00:40:24.353047 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae)
2026-04-04 00:40:24.353057 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae)
2026-04-04 00:40:24.353068 | orchestrator |
2026-04-04 00:40:24.353079 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.353090 | orchestrator | Saturday 04 April 2026 00:40:19 +0000 (0:00:00.720) 0:00:29.570 ********
2026-04-04 00:40:24.353100 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905)
2026-04-04 00:40:24.353111 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905)
2026-04-04 00:40:24.353122 | orchestrator |
2026-04-04 00:40:24.353132 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.353143 | orchestrator | Saturday 04 April 2026 00:40:19 +0000 (0:00:00.457) 0:00:30.027 ********
2026-04-04 00:40:24.353153 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5)
2026-04-04 00:40:24.353164 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5)
2026-04-04 00:40:24.353175 | orchestrator |
2026-04-04 00:40:24.353186 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:40:24.353196 | orchestrator | Saturday 04 April 2026 00:40:20 +0000 (0:00:00.433) 0:00:30.460 ********
2026-04-04 00:40:24.353207 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-04 00:40:24.353217 | orchestrator |
2026-04-04 00:40:24.353228 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353257 | orchestrator | Saturday 04 April 2026 00:40:20 +0000 (0:00:00.310) 0:00:30.770 ********
2026-04-04 00:40:24.353269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-04-04 00:40:24.353279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-04-04 00:40:24.353291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-04-04 00:40:24.353302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-04-04 00:40:24.353321 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-04-04 00:40:24.353332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-04-04 00:40:24.353342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-04-04 00:40:24.353353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-04-04 00:40:24.353363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-04-04 00:40:24.353374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-04-04 00:40:24.353385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-04-04 00:40:24.353395 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-04-04 00:40:24.353406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-04-04 00:40:24.353416 | orchestrator |
2026-04-04 00:40:24.353427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353439 | orchestrator | Saturday 04 April 2026 00:40:20 +0000 (0:00:00.341) 0:00:31.112 ********
2026-04-04 00:40:24.353449 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353460 | orchestrator |
2026-04-04 00:40:24.353471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353482 | orchestrator | Saturday 04 April 2026 00:40:21 +0000 (0:00:00.199) 0:00:31.311 ********
2026-04-04 00:40:24.353493 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353503 | orchestrator |
2026-04-04 00:40:24.353514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353525 | orchestrator | Saturday 04 April 2026 00:40:21 +0000 (0:00:00.167) 0:00:31.479 ********
2026-04-04 00:40:24.353535 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353547 | orchestrator |
2026-04-04 00:40:24.353558 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353575 | orchestrator | Saturday 04 April 2026 00:40:21 +0000 (0:00:00.175) 0:00:31.655 ********
2026-04-04 00:40:24.353586 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353596 | orchestrator |
2026-04-04 00:40:24.353607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353618 | orchestrator | Saturday 04 April 2026 00:40:21 +0000 (0:00:00.181) 0:00:31.837 ********
2026-04-04 00:40:24.353629 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353640 | orchestrator |
2026-04-04 00:40:24.353651 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353661 | orchestrator | Saturday 04 April 2026 00:40:21 +0000 (0:00:00.176) 0:00:32.013 ********
2026-04-04 00:40:24.353672 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353683 | orchestrator |
2026-04-04 00:40:24.353693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353704 | orchestrator | Saturday 04 April 2026 00:40:22 +0000 (0:00:00.522) 0:00:32.536 ********
2026-04-04 00:40:24.353715 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353726 | orchestrator |
2026-04-04 00:40:24.353736 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353747 | orchestrator | Saturday 04 April 2026 00:40:22 +0000 (0:00:00.201) 0:00:32.738 ********
2026-04-04 00:40:24.353758 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353768 | orchestrator |
2026-04-04 00:40:24.353779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353790 | orchestrator | Saturday 04 April 2026 00:40:22 +0000 (0:00:00.206) 0:00:32.945 ********
2026-04-04 00:40:24.353801 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-04-04 00:40:24.353819 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-04-04 00:40:24.353830 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-04-04 00:40:24.353890 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-04-04 00:40:24.353903 | orchestrator |
2026-04-04 00:40:24.353914 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353925 | orchestrator | Saturday 04 April 2026 00:40:23 +0000 (0:00:00.656) 0:00:33.601 ********
2026-04-04 00:40:24.353936 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353947 | orchestrator |
2026-04-04 00:40:24.353958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.353968 | orchestrator | Saturday 04 April 2026 00:40:23 +0000 (0:00:00.264) 0:00:33.866 ********
2026-04-04 00:40:24.353979 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.353989 | orchestrator |
2026-04-04 00:40:24.354000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.354011 | orchestrator | Saturday 04 April 2026 00:40:23 +0000 (0:00:00.218) 0:00:34.084 ********
2026-04-04 00:40:24.354105 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.354121 | orchestrator |
2026-04-04 00:40:24.354132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-04 00:40:24.354224 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.254) 0:00:34.339 ********
2026-04-04 00:40:24.354238 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:24.354248 | orchestrator |
2026-04-04 00:40:24.354269 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-04-04 00:40:28.071226 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.188) 0:00:34.527 ********
2026-04-04 00:40:28.071306 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-04-04 00:40:28.071315 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-04-04 00:40:28.071322 | orchestrator |
2026-04-04 00:40:28.071329 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-04-04 00:40:28.071336 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.165) 0:00:34.692 ********
2026-04-04 00:40:28.071342 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071348 | orchestrator |
2026-04-04 00:40:28.071354 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-04-04 00:40:28.071360 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.145) 0:00:34.838 ********
2026-04-04 00:40:28.071366 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071372 | orchestrator |
2026-04-04 00:40:28.071378 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-04-04 00:40:28.071394 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.117) 0:00:34.956 ********
2026-04-04 00:40:28.071400 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071406 | orchestrator |
2026-04-04 00:40:28.071412 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-04-04 00:40:28.071418 | orchestrator | Saturday 04 April 2026 00:40:24 +0000 (0:00:00.129) 0:00:35.086 ********
2026-04-04 00:40:28.071424 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:40:28.071439 | orchestrator |
2026-04-04 00:40:28.071445 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-04-04 00:40:28.071450 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.361) 0:00:35.447 ********
2026-04-04 00:40:28.071457 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '92575011-0645-5cdf-badf-43ad86ae8159'}})
2026-04-04 00:40:28.071463 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35995e13-d19e-546f-ae20-ff296f4077c7'}})
2026-04-04 00:40:28.071469 | orchestrator |
2026-04-04 00:40:28.071475 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-04-04 00:40:28.071480 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.158) 0:00:35.606 ********
2026-04-04 00:40:28.071487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '92575011-0645-5cdf-badf-43ad86ae8159'}})
2026-04-04 00:40:28.071512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35995e13-d19e-546f-ae20-ff296f4077c7'}})
2026-04-04 00:40:28.071519 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071525 | orchestrator |
2026-04-04 00:40:28.071531 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-04-04 00:40:28.071536 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.137) 0:00:35.743 ********
2026-04-04 00:40:28.071542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '92575011-0645-5cdf-badf-43ad86ae8159'}})
2026-04-04 00:40:28.071548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35995e13-d19e-546f-ae20-ff296f4077c7'}})
2026-04-04 00:40:28.071554 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071560 | orchestrator |
2026-04-04 00:40:28.071565 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-04-04 00:40:28.071571 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.141) 0:00:35.885 ********
2026-04-04 00:40:28.071577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '92575011-0645-5cdf-badf-43ad86ae8159'}})
2026-04-04 00:40:28.071583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35995e13-d19e-546f-ae20-ff296f4077c7'}})
2026-04-04 00:40:28.071589 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071595 | orchestrator |
2026-04-04 00:40:28.071601 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-04-04 00:40:28.071606 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.158) 0:00:36.044 ********
2026-04-04 00:40:28.071612 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:40:28.071618 | orchestrator |
2026-04-04 00:40:28.071624 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-04-04 00:40:28.071630 | orchestrator | Saturday 04 April 2026 00:40:25 +0000 (0:00:00.127) 0:00:36.171 ********
2026-04-04 00:40:28.071635 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:40:28.071641 | orchestrator |
2026-04-04 00:40:28.071647 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-04-04 00:40:28.071653 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.248) 0:00:36.419 ********
2026-04-04 00:40:28.071659 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071664 | orchestrator |
2026-04-04 00:40:28.071670 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-04-04 00:40:28.071676 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.159) 0:00:36.579 ********
2026-04-04 00:40:28.071682 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071688 | orchestrator |
2026-04-04 00:40:28.071693 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-04-04 00:40:28.071699 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.135) 0:00:36.714 ********
2026-04-04 00:40:28.071705 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071711 | orchestrator |
2026-04-04 00:40:28.071716 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-04-04 00:40:28.071722 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.131) 0:00:36.846 ********
2026-04-04 00:40:28.071728 | orchestrator | ok: [testbed-node-5] => {
2026-04-04 00:40:28.071734 | orchestrator |     "ceph_osd_devices": {
2026-04-04 00:40:28.071740 | orchestrator |         "sdb": {
2026-04-04 00:40:28.071758 | orchestrator |             "osd_lvm_uuid": "92575011-0645-5cdf-badf-43ad86ae8159"
2026-04-04 00:40:28.071764 | orchestrator |         },
2026-04-04 00:40:28.071770 | orchestrator |         "sdc": {
2026-04-04 00:40:28.071790 | orchestrator |             "osd_lvm_uuid": "35995e13-d19e-546f-ae20-ff296f4077c7"
2026-04-04 00:40:28.071797 | orchestrator |         }
2026-04-04 00:40:28.071804 | orchestrator |     }
2026-04-04 00:40:28.071811 | orchestrator | }
2026-04-04 00:40:28.071818 | orchestrator |
2026-04-04 00:40:28.071829 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-04 00:40:28.071854 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.119) 0:00:36.966 ********
2026-04-04 00:40:28.071864 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071874 | orchestrator |
2026-04-04 00:40:28.071883 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-04 00:40:28.071894 | orchestrator | Saturday 04 April 2026 00:40:26 +0000 (0:00:00.083) 0:00:37.050 ********
2026-04-04 00:40:28.071904 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071914 | orchestrator |
2026-04-04 00:40:28.071923 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-04 00:40:28.071930 | orchestrator | Saturday 04 April 2026 00:40:27 +0000 (0:00:00.204) 0:00:37.255 ********
2026-04-04 00:40:28.071937 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:40:28.071944 | orchestrator |
2026-04-04 00:40:28.071950 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-04 00:40:28.071957 | orchestrator | Saturday 04 April 2026 00:40:27 +0000 (0:00:00.106) 0:00:37.362 ********
2026-04-04 00:40:28.071964 | orchestrator | changed: [testbed-node-5] => {
2026-04-04 00:40:28.071971 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-04 00:40:28.071978 | orchestrator |         "ceph_osd_devices": {
2026-04-04 00:40:28.071985 | orchestrator |             "sdb": {
2026-04-04 00:40:28.071992 | orchestrator |                 "osd_lvm_uuid": "92575011-0645-5cdf-badf-43ad86ae8159"
2026-04-04 00:40:28.071999 | orchestrator |             },
2026-04-04 00:40:28.072005 | orchestrator |             "sdc": {
2026-04-04 00:40:28.072016 | orchestrator |                 "osd_lvm_uuid": "35995e13-d19e-546f-ae20-ff296f4077c7"
2026-04-04 00:40:28.072023 | orchestrator |             }
2026-04-04 00:40:28.072030 | orchestrator |         },
2026-04-04 00:40:28.072037 | orchestrator |         "lvm_volumes": [
2026-04-04 00:40:28.072043 | orchestrator |             {
2026-04-04 00:40:28.072049 | orchestrator |                 "data": "osd-block-92575011-0645-5cdf-badf-43ad86ae8159",
2026-04-04 00:40:28.072055 | orchestrator |                 "data_vg": "ceph-92575011-0645-5cdf-badf-43ad86ae8159"
2026-04-04 00:40:28.072061 | orchestrator |             },
2026-04-04 00:40:28.072069 | orchestrator |             {
2026-04-04 00:40:28.072075 | orchestrator |                 "data": "osd-block-35995e13-d19e-546f-ae20-ff296f4077c7",
2026-04-04 00:40:28.072081 | orchestrator |                 "data_vg": "ceph-35995e13-d19e-546f-ae20-ff296f4077c7"
2026-04-04 00:40:28.072087 | orchestrator |             }
2026-04-04 00:40:28.072093 | orchestrator |         ]
2026-04-04 00:40:28.072098 | orchestrator |     }
2026-04-04 00:40:28.072104 | orchestrator | }
2026-04-04 00:40:28.072110 | orchestrator |
2026-04-04 00:40:28.072117 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-04 00:40:28.072126 | orchestrator | Saturday 04 April 2026 00:40:27 +0000 (0:00:00.177) 0:00:37.539 ********
2026-04-04 00:40:28.072135 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-04 00:40:28.072144 | orchestrator |
2026-04-04 00:40:28.072153 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:40:28.072163 | orchestrator | testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-04 00:40:28.072173 | orchestrator | testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-04 00:40:28.072183 | orchestrator | testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
2026-04-04 00:40:28.072192 | orchestrator |
2026-04-04 00:40:28.072200 | orchestrator |
2026-04-04 00:40:28.072209 | orchestrator |
2026-04-04 00:40:28.072218 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:40:28.072227 | orchestrator | Saturday 04 April 2026 00:40:28 +0000 (0:00:00.690) 0:00:38.229 ********
2026-04-04 00:40:28.072243 | orchestrator | ===============================================================================
2026-04-04 00:40:28.072252 | orchestrator | Write configuration file ------------------------------------------------ 3.73s
2026-04-04 00:40:28.072261 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s
2026-04-04 00:40:28.072270 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s
2026-04-04 00:40:28.072278 | orchestrator | Get initial list of available block devices ----------------------------- 0.89s
2026-04-04 00:40:28.072287 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2026-04-04 00:40:28.072297 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s
2026-04-04 00:40:28.072306 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-04-04 00:40:28.072316 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.69s
2026-04-04 00:40:28.072325 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-04-04 00:40:28.072335 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-04-04 00:40:28.072345 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2026-04-04 00:40:28.072355 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2026-04-04 00:40:28.072365 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.62s
2026-04-04 00:40:28.072386 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s
2026-04-04 00:40:28.264333 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-04-04 00:40:28.264429 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-04-04 00:40:28.264444 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-04-04 00:40:28.264455 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2026-04-04 00:40:28.264466 | orchestrator | Print configuration data ------------------------------------------------ 0.55s
2026-04-04 00:40:28.264477 | orchestrator | Set WAL devices config data --------------------------------------------- 0.53s
2026-04-04 00:40:49.750160 | orchestrator | 2026-04-04 00:40:49 | INFO  | Task c17bdf16-c94f-4cbb-9325-80ff8515a989 (sync inventory) is running in background. Output coming soon.
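The "Print configuration data" output above shows how each entry in `ceph_osd_devices` is turned into an `lvm_volumes` entry: the OSD's LVM UUID is prefixed with `osd-block-` for the logical volume name and `ceph-` for the volume group name. A minimal sketch of that mapping, with the dict literal taken from the log and the transformation itself inferred from the output rather than from the playbook source:

```python
# Reconstruct the ceph_osd_devices -> lvm_volumes mapping visible in the
# "Print configuration data" task output. Values mirror the log for
# testbed-node-5; the comprehension is an inferred sketch, not playbook code.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "92575011-0645-5cdf-badf-43ad86ae8159"},
    "sdc": {"osd_lvm_uuid": "35995e13-d19e-546f-ae20-ff296f4077c7"},
}

lvm_volumes = [
    {
        "data": f"osd-block-{spec['osd_lvm_uuid']}",   # LV holding the OSD block data
        "data_vg": f"ceph-{spec['osd_lvm_uuid']}",     # VG the LV lives in
    }
    for spec in ceph_osd_devices.values()
]
```

Because the play runs with block-only devices (no separate DB/WAL), each entry carries only `data` and `data_vg`; the skipped `block + db`, `block + wal`, and `block + db + wal` tasks would add `db`/`wal` keys.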
2026-04-04 00:41:17.438271 | orchestrator | 2026-04-04 00:40:51 | INFO  | Starting group_vars file reorganization
2026-04-04 00:41:17.438370 | orchestrator | 2026-04-04 00:40:51 | INFO  | Moved 0 file(s) to their respective directories
2026-04-04 00:41:17.438384 | orchestrator | 2026-04-04 00:40:51 | INFO  | Group_vars file reorganization completed
2026-04-04 00:41:17.438392 | orchestrator | 2026-04-04 00:40:53 | INFO  | Starting variable preparation from inventory
2026-04-04 00:41:17.438399 | orchestrator | 2026-04-04 00:40:56 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-04 00:41:17.438407 | orchestrator | 2026-04-04 00:40:56 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-04 00:41:17.438414 | orchestrator | 2026-04-04 00:40:56 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-04 00:41:17.438421 | orchestrator | 2026-04-04 00:40:56 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-04 00:41:17.438428 | orchestrator | 2026-04-04 00:40:56 | INFO  | Variable preparation completed
2026-04-04 00:41:17.438435 | orchestrator | 2026-04-04 00:40:57 | INFO  | Starting inventory overwrite handling
2026-04-04 00:41:17.438442 | orchestrator | 2026-04-04 00:40:57 | INFO  | Handling group overwrites in 99-overwrite
2026-04-04 00:41:17.438450 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removing group frr:children from 60-generic
2026-04-04 00:41:17.438481 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-04 00:41:17.438488 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-04 00:41:17.438495 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-04 00:41:17.438502 | orchestrator | 2026-04-04 00:40:57 | INFO  | Handling group overwrites in 20-roles
2026-04-04 00:41:17.438509 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-04 00:41:17.438516 | orchestrator | 2026-04-04 00:40:57 | INFO  | Removed 5 group(s) in total
2026-04-04 00:41:17.438522 | orchestrator | 2026-04-04 00:40:57 | INFO  | Inventory overwrite handling completed
2026-04-04 00:41:17.438528 | orchestrator | 2026-04-04 00:40:58 | INFO  | Starting merge of inventory files
2026-04-04 00:41:17.438535 | orchestrator | 2026-04-04 00:40:58 | INFO  | Inventory files merged successfully
2026-04-04 00:41:17.438541 | orchestrator | 2026-04-04 00:41:03 | INFO  | Generating minified hosts file
2026-04-04 00:41:17.438548 | orchestrator | 2026-04-04 00:41:05 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-04 00:41:17.438556 | orchestrator | 2026-04-04 00:41:05 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-04 00:41:17.438578 | orchestrator | 2026-04-04 00:41:06 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-04 00:41:17.438587 | orchestrator | 2026-04-04 00:41:16 | INFO  | Successfully wrote ClusterShell configuration
2026-04-04 00:41:17.438595 | orchestrator | [master 7d60e73] 2026-04-04-00-41
2026-04-04 00:41:17.438604 | orchestrator | 5 files changed, 75 insertions(+), 10 deletions(-)
2026-04-04 00:41:17.438613 | orchestrator | create mode 100644 fast/host_vars/testbed-node-3/ceph-lvm-configuration.yml
2026-04-04 00:41:17.438619 | orchestrator | create mode 100644 fast/host_vars/testbed-node-4/ceph-lvm-configuration.yml
2026-04-04 00:41:17.438626 | orchestrator | create mode 100644 fast/host_vars/testbed-node-5/ceph-lvm-configuration.yml
2026-04-04 00:41:18.710746 | orchestrator | 2026-04-04 00:41:18 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-04-04 00:41:18.760013 | orchestrator | 2026-04-04 00:41:18 | INFO  | Task a3c4c318-9ce3-493e-a6ae-d64b9893caa3 (ceph-create-lvm-devices) was prepared for execution.
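The "inventory overwrite handling" entries above show higher-priority inventory layers (e.g. `99-overwrite`) removing group definitions from lower layers (`60-generic`, `50-infrastructure`, `50-ceph`) before the merge. A minimal sketch of that idea, assuming each layer is a plain dict of group name to hosts (the function and variable names here are illustrative, not OSISM's actual API):

```python
def handle_overwrites(layers, overwrite_layer):
    """Remove groups redefined in the overwrite layer from all lower layers.

    Returns the number of group definitions removed, mirroring the
    'Removed N group(s) in total' log line."""
    removed = 0
    for group in layers.get(overwrite_layer, {}):
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                del groups[group]  # lower-priority definition loses
                removed += 1
    return removed

# Toy layers modelled on the groups named in the log above.
layers = {
    "60-generic": {"frr:children": ["generic"]},
    "50-infrastructure": {"netbird:children": ["manager"], "k3s_node": []},
    "99-overwrite": {"frr:children": [], "netbird:children": []},
}
removed = handle_overwrites(layers, "99-overwrite")  # removes 2 definitions
```

After this step only the overwrite layer's version of each group survives, so the subsequent "merge of inventory files" sees no conflicting definitions.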
2026-04-04 00:41:18.760091 | orchestrator | 2026-04-04 00:41:18 | INFO  | It takes a moment until task a3c4c318-9ce3-493e-a6ae-d64b9893caa3 (ceph-create-lvm-devices) has been started and output is visible here. 2026-04-04 00:41:29.286090 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:41:29.286214 | orchestrator | 2.16.14 2026-04-04 00:41:29.286233 | orchestrator | 2026-04-04 00:41:29.286246 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-04 00:41:29.286258 | orchestrator | 2026-04-04 00:41:29.286270 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:41:29.286282 | orchestrator | Saturday 04 April 2026 00:41:22 +0000 (0:00:00.286) 0:00:00.286 ******** 2026-04-04 00:41:29.286294 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-04 00:41:29.286305 | orchestrator | 2026-04-04 00:41:29.286316 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:41:29.286336 | orchestrator | Saturday 04 April 2026 00:41:23 +0000 (0:00:00.251) 0:00:00.538 ******** 2026-04-04 00:41:29.286353 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:29.286371 | orchestrator | 2026-04-04 00:41:29.286392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286415 | orchestrator | Saturday 04 April 2026 00:41:23 +0000 (0:00:00.237) 0:00:00.775 ******** 2026-04-04 00:41:29.286463 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-04 00:41:29.286481 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-04 00:41:29.286497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-04 00:41:29.286514 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-04 00:41:29.286531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-04 00:41:29.286565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-04 00:41:29.286582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-04 00:41:29.286596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-04 00:41:29.286607 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-04 00:41:29.286619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-04 00:41:29.286630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-04 00:41:29.286641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-04 00:41:29.286652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-04 00:41:29.286663 | orchestrator | 2026-04-04 00:41:29.286674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286685 | orchestrator | Saturday 04 April 2026 00:41:23 +0000 (0:00:00.357) 0:00:01.133 ******** 2026-04-04 00:41:29.286696 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286708 | orchestrator | 2026-04-04 00:41:29.286719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286730 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.339) 0:00:01.473 ******** 2026-04-04 00:41:29.286741 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286752 | orchestrator | 2026-04-04 00:41:29.286762 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286773 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.165) 0:00:01.638 ******** 2026-04-04 00:41:29.286811 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286822 | orchestrator | 2026-04-04 00:41:29.286833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286844 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.162) 0:00:01.800 ******** 2026-04-04 00:41:29.286853 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286863 | orchestrator | 2026-04-04 00:41:29.286872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286881 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.170) 0:00:01.971 ******** 2026-04-04 00:41:29.286891 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286900 | orchestrator | 2026-04-04 00:41:29.286910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286919 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.183) 0:00:02.154 ******** 2026-04-04 00:41:29.286929 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286938 | orchestrator | 2026-04-04 00:41:29.286947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286957 | orchestrator | Saturday 04 April 2026 00:41:24 +0000 (0:00:00.177) 0:00:02.332 ******** 2026-04-04 00:41:29.286966 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.286976 | orchestrator | 2026-04-04 00:41:29.286986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.286995 | orchestrator | Saturday 04 April 2026 00:41:25 +0000 (0:00:00.175) 0:00:02.508 ******** 
2026-04-04 00:41:29.287005 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287023 | orchestrator | 2026-04-04 00:41:29.287033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.287042 | orchestrator | Saturday 04 April 2026 00:41:25 +0000 (0:00:00.166) 0:00:02.675 ******** 2026-04-04 00:41:29.287052 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae) 2026-04-04 00:41:29.287062 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae) 2026-04-04 00:41:29.287072 | orchestrator | 2026-04-04 00:41:29.287081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.287110 | orchestrator | Saturday 04 April 2026 00:41:25 +0000 (0:00:00.379) 0:00:03.054 ******** 2026-04-04 00:41:29.287120 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8) 2026-04-04 00:41:29.287130 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8) 2026-04-04 00:41:29.287139 | orchestrator | 2026-04-04 00:41:29.287149 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.287159 | orchestrator | Saturday 04 April 2026 00:41:26 +0000 (0:00:00.374) 0:00:03.429 ******** 2026-04-04 00:41:29.287168 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737) 2026-04-04 00:41:29.287178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737) 2026-04-04 00:41:29.287188 | orchestrator | 2026-04-04 00:41:29.287197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.287207 | orchestrator | Saturday 04 April 2026 00:41:26 
+0000 (0:00:00.517) 0:00:03.947 ******** 2026-04-04 00:41:29.287217 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9) 2026-04-04 00:41:29.287226 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9) 2026-04-04 00:41:29.287236 | orchestrator | 2026-04-04 00:41:29.287245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:29.287255 | orchestrator | Saturday 04 April 2026 00:41:27 +0000 (0:00:00.524) 0:00:04.471 ******** 2026-04-04 00:41:29.287264 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:41:29.287274 | orchestrator | 2026-04-04 00:41:29.287284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287293 | orchestrator | Saturday 04 April 2026 00:41:27 +0000 (0:00:00.557) 0:00:05.029 ******** 2026-04-04 00:41:29.287303 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-04 00:41:29.287313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-04 00:41:29.287322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-04 00:41:29.287332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-04 00:41:29.287341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-04 00:41:29.287351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-04 00:41:29.287360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-04 00:41:29.287370 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-04-04 00:41:29.287379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-04 00:41:29.287388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-04 00:41:29.287398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-04 00:41:29.287407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-04 00:41:29.287423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-04 00:41:29.287432 | orchestrator | 2026-04-04 00:41:29.287442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287451 | orchestrator | Saturday 04 April 2026 00:41:27 +0000 (0:00:00.378) 0:00:05.408 ******** 2026-04-04 00:41:29.287461 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287470 | orchestrator | 2026-04-04 00:41:29.287480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287489 | orchestrator | Saturday 04 April 2026 00:41:28 +0000 (0:00:00.198) 0:00:05.607 ******** 2026-04-04 00:41:29.287499 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287508 | orchestrator | 2026-04-04 00:41:29.287525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287535 | orchestrator | Saturday 04 April 2026 00:41:28 +0000 (0:00:00.182) 0:00:05.789 ******** 2026-04-04 00:41:29.287544 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287553 | orchestrator | 2026-04-04 00:41:29.287563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287573 | orchestrator | Saturday 04 April 2026 00:41:28 
+0000 (0:00:00.188) 0:00:05.978 ******** 2026-04-04 00:41:29.287582 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287592 | orchestrator | 2026-04-04 00:41:29.287601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287611 | orchestrator | Saturday 04 April 2026 00:41:28 +0000 (0:00:00.187) 0:00:06.165 ******** 2026-04-04 00:41:29.287620 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287630 | orchestrator | 2026-04-04 00:41:29.287639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287649 | orchestrator | Saturday 04 April 2026 00:41:28 +0000 (0:00:00.182) 0:00:06.348 ******** 2026-04-04 00:41:29.287658 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287668 | orchestrator | 2026-04-04 00:41:29.287678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:29.287687 | orchestrator | Saturday 04 April 2026 00:41:29 +0000 (0:00:00.184) 0:00:06.532 ******** 2026-04-04 00:41:29.287697 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:29.287706 | orchestrator | 2026-04-04 00:41:29.287721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.043603 | orchestrator | Saturday 04 April 2026 00:41:29 +0000 (0:00:00.169) 0:00:06.702 ******** 2026-04-04 00:41:37.043715 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.043732 | orchestrator | 2026-04-04 00:41:37.043746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.043758 | orchestrator | Saturday 04 April 2026 00:41:29 +0000 (0:00:00.181) 0:00:06.883 ******** 2026-04-04 00:41:37.043769 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-04 00:41:37.043833 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-04 
00:41:37.043845 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-04 00:41:37.043856 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-04 00:41:37.043867 | orchestrator | 2026-04-04 00:41:37.043879 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.043890 | orchestrator | Saturday 04 April 2026 00:41:30 +0000 (0:00:00.879) 0:00:07.763 ******** 2026-04-04 00:41:37.043901 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.043911 | orchestrator | 2026-04-04 00:41:37.043922 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.043933 | orchestrator | Saturday 04 April 2026 00:41:30 +0000 (0:00:00.211) 0:00:07.974 ******** 2026-04-04 00:41:37.043944 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.043955 | orchestrator | 2026-04-04 00:41:37.043966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.044000 | orchestrator | Saturday 04 April 2026 00:41:30 +0000 (0:00:00.204) 0:00:08.179 ******** 2026-04-04 00:41:37.044012 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044022 | orchestrator | 2026-04-04 00:41:37.044033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:37.044044 | orchestrator | Saturday 04 April 2026 00:41:30 +0000 (0:00:00.184) 0:00:08.364 ******** 2026-04-04 00:41:37.044054 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044065 | orchestrator | 2026-04-04 00:41:37.044090 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-04 00:41:37.044101 | orchestrator | Saturday 04 April 2026 00:41:31 +0000 (0:00:00.188) 0:00:08.552 ******** 2026-04-04 00:41:37.044112 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044123 | orchestrator | 2026-04-04 
00:41:37.044136 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-04 00:41:37.044149 | orchestrator | Saturday 04 April 2026 00:41:31 +0000 (0:00:00.117) 0:00:08.669 ******** 2026-04-04 00:41:37.044162 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f0c57fe1-7323-5f70-a575-22ad75776519'}}) 2026-04-04 00:41:37.044175 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e865913-a109-5f6b-9820-a5901c50a906'}}) 2026-04-04 00:41:37.044188 | orchestrator | 2026-04-04 00:41:37.044202 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-04 00:41:37.044223 | orchestrator | Saturday 04 April 2026 00:41:31 +0000 (0:00:00.181) 0:00:08.851 ******** 2026-04-04 00:41:37.044244 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'}) 2026-04-04 00:41:37.044264 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'}) 2026-04-04 00:41:37.044283 | orchestrator | 2026-04-04 00:41:37.044304 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-04 00:41:37.044324 | orchestrator | Saturday 04 April 2026 00:41:33 +0000 (0:00:01.913) 0:00:10.764 ******** 2026-04-04 00:41:37.044344 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.044366 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.044387 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044409 
| orchestrator | 2026-04-04 00:41:37.044432 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-04 00:41:37.044453 | orchestrator | Saturday 04 April 2026 00:41:33 +0000 (0:00:00.189) 0:00:10.953 ******** 2026-04-04 00:41:37.044475 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'}) 2026-04-04 00:41:37.044490 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'}) 2026-04-04 00:41:37.044503 | orchestrator | 2026-04-04 00:41:37.044514 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-04 00:41:37.044525 | orchestrator | Saturday 04 April 2026 00:41:35 +0000 (0:00:01.534) 0:00:12.487 ******** 2026-04-04 00:41:37.044535 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.044546 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.044557 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044567 | orchestrator | 2026-04-04 00:41:37.044578 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-04 00:41:37.044600 | orchestrator | Saturday 04 April 2026 00:41:35 +0000 (0:00:00.192) 0:00:12.680 ******** 2026-04-04 00:41:37.044631 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044642 | orchestrator | 2026-04-04 00:41:37.044654 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-04 00:41:37.044664 | orchestrator | Saturday 04 April 2026 
00:41:35 +0000 (0:00:00.126) 0:00:12.807 ******** 2026-04-04 00:41:37.044675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.044686 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.044696 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044707 | orchestrator | 2026-04-04 00:41:37.044718 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-04 00:41:37.044729 | orchestrator | Saturday 04 April 2026 00:41:35 +0000 (0:00:00.378) 0:00:13.185 ******** 2026-04-04 00:41:37.044740 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044750 | orchestrator | 2026-04-04 00:41:37.044761 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-04 00:41:37.044771 | orchestrator | Saturday 04 April 2026 00:41:35 +0000 (0:00:00.138) 0:00:13.323 ******** 2026-04-04 00:41:37.044807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.044819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.044830 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044840 | orchestrator | 2026-04-04 00:41:37.044851 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-04 00:41:37.044862 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.143) 0:00:13.467 ******** 2026-04-04 00:41:37.044873 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
00:41:37.044883 | orchestrator | 2026-04-04 00:41:37.044895 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-04 00:41:37.044915 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.123) 0:00:13.591 ******** 2026-04-04 00:41:37.044927 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.044938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.044949 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.044959 | orchestrator | 2026-04-04 00:41:37.044970 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-04 00:41:37.044981 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.143) 0:00:13.734 ******** 2026-04-04 00:41:37.044991 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:37.045002 | orchestrator | 2026-04-04 00:41:37.045013 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-04 00:41:37.045024 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.127) 0:00:13.862 ******** 2026-04-04 00:41:37.045035 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.045045 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.045056 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.045067 | orchestrator | 2026-04-04 00:41:37.045078 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] 
*************** 2026-04-04 00:41:37.045096 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.138) 0:00:14.001 ******** 2026-04-04 00:41:37.045107 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.045118 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.045129 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.045139 | orchestrator | 2026-04-04 00:41:37.045150 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-04 00:41:37.045161 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.173) 0:00:14.174 ******** 2026-04-04 00:41:37.045172 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:37.045183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:37.045193 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.045204 | orchestrator | 2026-04-04 00:41:37.045215 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-04 00:41:37.045225 | orchestrator | Saturday 04 April 2026 00:41:36 +0000 (0:00:00.150) 0:00:14.324 ******** 2026-04-04 00:41:37.045247 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:37.045258 | orchestrator | 2026-04-04 00:41:37.045269 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-04 00:41:37.045287 | orchestrator | Saturday 04 April 2026 00:41:37 +0000 (0:00:00.135) 0:00:14.460 ******** 
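The "Create block VGs" / "Create block LVs" items in the play above all derive their names from the `osd_lvm_uuid` each device carries in `ceph_osd_devices` (written earlier to `ceph-lvm-configuration.yml`). A sketch of that derivation, assuming the dict shape shown in the log; the function name is hypothetical:

```python
def lvm_names(ceph_osd_devices):
    """Derive per-OSD LVM names from each device's osd_lvm_uuid.

    Matches the items seen in the log, e.g. data_vg
    'ceph-f0c57fe1-...' and data LV 'osd-block-f0c57fe1-...'."""
    out = []
    for device, cfg in ceph_osd_devices.items():
        uuid = cfg["osd_lvm_uuid"]
        out.append({
            "device": f"/dev/{device}",
            "data_vg": f"ceph-{uuid}",    # volume group, one per OSD device
            "data": f"osd-block-{uuid}",  # logical volume inside that VG
        })
    return out

devices = {"sdb": {"osd_lvm_uuid": "f0c57fe1-7323-5f70-a575-22ad75776519"}}
names = lvm_names(devices)
```

Keying the names on a stable UUID rather than the kernel device name (`sdb`) keeps the VG/LV association intact even if device enumeration changes across reboots.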
2026-04-04 00:41:42.937270 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937350 | orchestrator | 2026-04-04 00:41:42.937363 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-04 00:41:42.937372 | orchestrator | Saturday 04 April 2026 00:41:37 +0000 (0:00:00.137) 0:00:14.597 ******** 2026-04-04 00:41:42.937381 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937389 | orchestrator | 2026-04-04 00:41:42.937397 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-04 00:41:42.937405 | orchestrator | Saturday 04 April 2026 00:41:37 +0000 (0:00:00.158) 0:00:14.756 ******** 2026-04-04 00:41:42.937413 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:41:42.937422 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-04 00:41:42.937430 | orchestrator | } 2026-04-04 00:41:42.937438 | orchestrator | 2026-04-04 00:41:42.937457 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-04 00:41:42.937465 | orchestrator | Saturday 04 April 2026 00:41:37 +0000 (0:00:00.433) 0:00:15.190 ******** 2026-04-04 00:41:42.937473 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:41:42.937481 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-04 00:41:42.937489 | orchestrator | } 2026-04-04 00:41:42.937497 | orchestrator | 2026-04-04 00:41:42.937505 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-04 00:41:42.937513 | orchestrator | Saturday 04 April 2026 00:41:37 +0000 (0:00:00.144) 0:00:15.335 ******** 2026-04-04 00:41:42.937521 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:41:42.937529 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-04 00:41:42.937536 | orchestrator | } 2026-04-04 00:41:42.937544 | orchestrator | 2026-04-04 00:41:42.937552 | orchestrator | TASK [Gather DB VGs with total and 
available size in bytes] ******************** 2026-04-04 00:41:42.937560 | orchestrator | Saturday 04 April 2026 00:41:38 +0000 (0:00:00.127) 0:00:15.462 ******** 2026-04-04 00:41:42.937568 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:42.937576 | orchestrator | 2026-04-04 00:41:42.937596 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-04 00:41:42.937604 | orchestrator | Saturday 04 April 2026 00:41:38 +0000 (0:00:00.644) 0:00:16.107 ******** 2026-04-04 00:41:42.937630 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:42.937638 | orchestrator | 2026-04-04 00:41:42.937646 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-04 00:41:42.937654 | orchestrator | Saturday 04 April 2026 00:41:39 +0000 (0:00:00.487) 0:00:16.595 ******** 2026-04-04 00:41:42.937661 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:42.937669 | orchestrator | 2026-04-04 00:41:42.937677 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-04 00:41:42.937685 | orchestrator | Saturday 04 April 2026 00:41:39 +0000 (0:00:00.517) 0:00:17.113 ******** 2026-04-04 00:41:42.937693 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:42.937700 | orchestrator | 2026-04-04 00:41:42.937708 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-04 00:41:42.937716 | orchestrator | Saturday 04 April 2026 00:41:39 +0000 (0:00:00.143) 0:00:17.256 ******** 2026-04-04 00:41:42.937724 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937731 | orchestrator | 2026-04-04 00:41:42.937739 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-04 00:41:42.937747 | orchestrator | Saturday 04 April 2026 00:41:39 +0000 (0:00:00.103) 0:00:17.359 ******** 2026-04-04 00:41:42.937755 | orchestrator | skipping: [testbed-node-3] 
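The "Gather ... VGs with total and available size in bytes" tasks above collect LVM state that the later size checks consume. One common way to obtain it is `vgs --units b --reportformat json`; the sketch below parses that report shape into a name-to-sizes map. The sample JSON is fabricated for illustration, not taken from this run:

```python
import json

def vg_sizes(vgs_json):
    """Map VG name -> (size_bytes, free_bytes) from
    `vgs --units b --reportformat json` output."""
    vgs = json.loads(vgs_json)["report"][0]["vg"]
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")),
                        int(vg["vg_free"].rstrip("B")))
        for vg in vgs
    }

# Hypothetical sample report with a single DB volume group.
sample = ('{"report": [{"vg": [{"vg_name": "ceph-db", '
          '"vg_size": "21470642176B", "vg_free": "21470642176B"}]}]}')
sizes = vg_sizes(sample)
```

On this testbed node the combined report is empty (`"vg": []`, as printed by "Print LVM VGs report data"), which is why every size calculation afterwards is skipped.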
2026-04-04 00:41:42.937762 | orchestrator | 2026-04-04 00:41:42.937770 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-04 00:41:42.937812 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.124) 0:00:17.484 ******** 2026-04-04 00:41:42.937820 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 00:41:42.937828 | orchestrator |  "vgs_report": { 2026-04-04 00:41:42.937838 | orchestrator |  "vg": [] 2026-04-04 00:41:42.937847 | orchestrator |  } 2026-04-04 00:41:42.937856 | orchestrator | } 2026-04-04 00:41:42.937865 | orchestrator | 2026-04-04 00:41:42.937874 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-04 00:41:42.937883 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.141) 0:00:17.625 ******** 2026-04-04 00:41:42.937893 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937902 | orchestrator | 2026-04-04 00:41:42.937911 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-04 00:41:42.937921 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.118) 0:00:17.744 ******** 2026-04-04 00:41:42.937930 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937939 | orchestrator | 2026-04-04 00:41:42.937949 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-04 00:41:42.937958 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.129) 0:00:17.873 ******** 2026-04-04 00:41:42.937967 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.937976 | orchestrator | 2026-04-04 00:41:42.937985 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-04 00:41:42.937994 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.260) 0:00:18.133 ******** 2026-04-04 00:41:42.938003 | orchestrator | skipping: [testbed-node-3] 
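The paired "Calculate size needed for LVs on ceph_db_devices" and "Fail if size of DB LVs ... > available" tasks reduce to one comparison: the space the requested DB LVs would consume versus the VG's free bytes. A minimal sketch under that assumption (function name hypothetical; the 30 GiB floor mirrors the "Fail if DB LV size < 30 GiB" checks in the play):

```python
GiB = 1024 ** 3
MIN_DB_LV = 30 * GiB  # lower bound enforced by the play's DB LV size checks

def db_lvs_fit(num_osds, db_lv_size_bytes, vg_free_bytes):
    """Return True when num_osds DB LVs of the given size fit in the VG."""
    if db_lv_size_bytes < MIN_DB_LV:
        raise ValueError("DB LV size below 30 GiB minimum")
    return num_osds * db_lv_size_bytes <= vg_free_bytes
```

All of these checks are skipped on testbed-node-3 because no dedicated DB/WAL devices are configured, only the collocated block VGs created earlier.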
2026-04-04 00:41:42.938012 | orchestrator | 2026-04-04 00:41:42.938063 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-04 00:41:42.938073 | orchestrator | Saturday 04 April 2026 00:41:40 +0000 (0:00:00.117) 0:00:18.251 ******** 2026-04-04 00:41:42.938082 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938092 | orchestrator | 2026-04-04 00:41:42.938102 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-04 00:41:42.938111 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.190) 0:00:18.442 ******** 2026-04-04 00:41:42.938119 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938126 | orchestrator | 2026-04-04 00:41:42.938134 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-04 00:41:42.938142 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.139) 0:00:18.581 ******** 2026-04-04 00:41:42.938150 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938164 | orchestrator | 2026-04-04 00:41:42.938172 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-04 00:41:42.938180 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.124) 0:00:18.705 ******** 2026-04-04 00:41:42.938202 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938210 | orchestrator | 2026-04-04 00:41:42.938218 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-04 00:41:42.938226 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.137) 0:00:18.842 ******** 2026-04-04 00:41:42.938234 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938242 | orchestrator | 2026-04-04 00:41:42.938250 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-04 00:41:42.938258 | orchestrator | 
Saturday 04 April 2026 00:41:41 +0000 (0:00:00.143) 0:00:18.986 ******** 2026-04-04 00:41:42.938265 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938273 | orchestrator | 2026-04-04 00:41:42.938281 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-04 00:41:42.938289 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.134) 0:00:19.121 ******** 2026-04-04 00:41:42.938297 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938305 | orchestrator | 2026-04-04 00:41:42.938312 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-04 00:41:42.938320 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.102) 0:00:19.223 ******** 2026-04-04 00:41:42.938328 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938336 | orchestrator | 2026-04-04 00:41:42.938344 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-04 00:41:42.938352 | orchestrator | Saturday 04 April 2026 00:41:41 +0000 (0:00:00.101) 0:00:19.325 ******** 2026-04-04 00:41:42.938359 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938367 | orchestrator | 2026-04-04 00:41:42.938375 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-04 00:41:42.938383 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.115) 0:00:19.440 ******** 2026-04-04 00:41:42.938391 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938399 | orchestrator | 2026-04-04 00:41:42.938411 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-04 00:41:42.938419 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.110) 0:00:19.551 ******** 2026-04-04 00:41:42.938428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 
'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:42.938437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:42.938445 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938452 | orchestrator | 2026-04-04 00:41:42.938460 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-04 00:41:42.938468 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.115) 0:00:19.667 ******** 2026-04-04 00:41:42.938476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:42.938484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:42.938492 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938500 | orchestrator | 2026-04-04 00:41:42.938508 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-04 00:41:42.938516 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.260) 0:00:19.928 ******** 2026-04-04 00:41:42.938524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:42.938531 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:42.938545 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938553 | orchestrator | 2026-04-04 00:41:42.938561 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 
2026-04-04 00:41:42.938569 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.125) 0:00:20.053 ******** 2026-04-04 00:41:42.938576 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:42.938584 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:42.938592 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938600 | orchestrator | 2026-04-04 00:41:42.938608 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-04 00:41:42.938616 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.133) 0:00:20.186 ******** 2026-04-04 00:41:42.938623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:42.938631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:42.938639 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:42.938647 | orchestrator | 2026-04-04 00:41:42.938655 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-04 00:41:42.938663 | orchestrator | Saturday 04 April 2026 00:41:42 +0000 (0:00:00.123) 0:00:20.310 ******** 2026-04-04 00:41:42.938676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 
'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661575 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661583 | orchestrator | 2026-04-04 00:41:47.661589 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-04 00:41:47.661594 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.127) 0:00:20.438 ******** 2026-04-04 00:41:47.661599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661615 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661619 | orchestrator | 2026-04-04 00:41:47.661624 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-04 00:41:47.661628 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.122) 0:00:20.560 ******** 2026-04-04 00:41:47.661632 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661636 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661648 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661652 | orchestrator | 2026-04-04 00:41:47.661656 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-04 00:41:47.661660 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.142) 0:00:20.702 ******** 2026-04-04 00:41:47.661664 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:47.661669 | 
orchestrator | 2026-04-04 00:41:47.661695 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-04 00:41:47.661699 | orchestrator | Saturday 04 April 2026 00:41:43 +0000 (0:00:00.456) 0:00:21.159 ******** 2026-04-04 00:41:47.661703 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:47.661707 | orchestrator | 2026-04-04 00:41:47.661711 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-04 00:41:47.661735 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.542) 0:00:21.701 ******** 2026-04-04 00:41:47.661740 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:41:47.661744 | orchestrator | 2026-04-04 00:41:47.661748 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-04 00:41:47.661752 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.127) 0:00:21.829 ******** 2026-04-04 00:41:47.661756 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'vg_name': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'}) 2026-04-04 00:41:47.661761 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'vg_name': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'}) 2026-04-04 00:41:47.661822 | orchestrator | 2026-04-04 00:41:47.661828 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-04 00:41:47.661832 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.150) 0:00:21.980 ******** 2026-04-04 00:41:47.661836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661840 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 
'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661844 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661847 | orchestrator | 2026-04-04 00:41:47.661860 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-04 00:41:47.661863 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.150) 0:00:22.130 ******** 2026-04-04 00:41:47.661867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661875 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661885 | orchestrator | 2026-04-04 00:41:47.661889 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-04 00:41:47.661893 | orchestrator | Saturday 04 April 2026 00:41:44 +0000 (0:00:00.246) 0:00:22.377 ******** 2026-04-04 00:41:47.661897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'})  2026-04-04 00:41:47.661901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'})  2026-04-04 00:41:47.661905 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:41:47.661908 | orchestrator | 2026-04-04 00:41:47.661912 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-04 00:41:47.661916 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.168) 0:00:22.546 ******** 2026-04-04 00:41:47.661931 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 
00:41:47.661935 | orchestrator |  "lvm_report": { 2026-04-04 00:41:47.661939 | orchestrator |  "lv": [ 2026-04-04 00:41:47.661943 | orchestrator |  { 2026-04-04 00:41:47.661947 | orchestrator |  "lv_name": "osd-block-1e865913-a109-5f6b-9820-a5901c50a906", 2026-04-04 00:41:47.661951 | orchestrator |  "vg_name": "ceph-1e865913-a109-5f6b-9820-a5901c50a906" 2026-04-04 00:41:47.661955 | orchestrator |  }, 2026-04-04 00:41:47.661971 | orchestrator |  { 2026-04-04 00:41:47.661982 | orchestrator |  "lv_name": "osd-block-f0c57fe1-7323-5f70-a575-22ad75776519", 2026-04-04 00:41:47.661986 | orchestrator |  "vg_name": "ceph-f0c57fe1-7323-5f70-a575-22ad75776519" 2026-04-04 00:41:47.661990 | orchestrator |  } 2026-04-04 00:41:47.661994 | orchestrator |  ], 2026-04-04 00:41:47.661998 | orchestrator |  "pv": [ 2026-04-04 00:41:47.662002 | orchestrator |  { 2026-04-04 00:41:47.662006 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-04 00:41:47.662010 | orchestrator |  "vg_name": "ceph-f0c57fe1-7323-5f70-a575-22ad75776519" 2026-04-04 00:41:47.662054 | orchestrator |  }, 2026-04-04 00:41:47.662059 | orchestrator |  { 2026-04-04 00:41:47.662064 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-04 00:41:47.662068 | orchestrator |  "vg_name": "ceph-1e865913-a109-5f6b-9820-a5901c50a906" 2026-04-04 00:41:47.662081 | orchestrator |  } 2026-04-04 00:41:47.662085 | orchestrator |  ] 2026-04-04 00:41:47.662090 | orchestrator |  } 2026-04-04 00:41:47.662095 | orchestrator | } 2026-04-04 00:41:47.662099 | orchestrator | 2026-04-04 00:41:47.662104 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-04 00:41:47.662108 | orchestrator | 2026-04-04 00:41:47.662113 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-04 00:41:47.662128 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.255) 0:00:22.801 ******** 2026-04-04 00:41:47.662133 | orchestrator | ok: [testbed-node-4 -> 
testbed-manager(192.168.16.5)] 2026-04-04 00:41:47.662138 | orchestrator | 2026-04-04 00:41:47.662142 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-04 00:41:47.662147 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.246) 0:00:23.048 ******** 2026-04-04 00:41:47.662151 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:41:47.662155 | orchestrator | 2026-04-04 00:41:47.662168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662173 | orchestrator | Saturday 04 April 2026 00:41:45 +0000 (0:00:00.223) 0:00:23.271 ******** 2026-04-04 00:41:47.662178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:41:47.662182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:41:47.662186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:41:47.662190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:41:47.662195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:41:47.662199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:41:47.662203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:41:47.662208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:41:47.662212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-04 00:41:47.662216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:41:47.662221 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-04 00:41:47.662225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:41:47.662230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:41:47.662240 | orchestrator | 2026-04-04 00:41:47.662245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662250 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.420) 0:00:23.692 ******** 2026-04-04 00:41:47.662254 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662264 | orchestrator | 2026-04-04 00:41:47.662269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662274 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.191) 0:00:23.883 ******** 2026-04-04 00:41:47.662278 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662283 | orchestrator | 2026-04-04 00:41:47.662287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662291 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.203) 0:00:24.087 ******** 2026-04-04 00:41:47.662294 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662298 | orchestrator | 2026-04-04 00:41:47.662302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662305 | orchestrator | Saturday 04 April 2026 00:41:46 +0000 (0:00:00.183) 0:00:24.270 ******** 2026-04-04 00:41:47.662309 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662313 | orchestrator | 2026-04-04 00:41:47.662317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662321 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 
(0:00:00.427) 0:00:24.698 ******** 2026-04-04 00:41:47.662324 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662328 | orchestrator | 2026-04-04 00:41:47.662332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:47.662336 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 (0:00:00.180) 0:00:24.879 ******** 2026-04-04 00:41:47.662339 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:47.662343 | orchestrator | 2026-04-04 00:41:47.662351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208265 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 (0:00:00.199) 0:00:25.078 ******** 2026-04-04 00:41:57.208342 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208350 | orchestrator | 2026-04-04 00:41:57.208356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208362 | orchestrator | Saturday 04 April 2026 00:41:47 +0000 (0:00:00.248) 0:00:25.327 ******** 2026-04-04 00:41:57.208366 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208371 | orchestrator | 2026-04-04 00:41:57.208376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208381 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.176) 0:00:25.503 ******** 2026-04-04 00:41:57.208386 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca) 2026-04-04 00:41:57.208391 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca) 2026-04-04 00:41:57.208396 | orchestrator | 2026-04-04 00:41:57.208400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208405 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 
(0:00:00.450) 0:00:25.954 ******** 2026-04-04 00:41:57.208409 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55) 2026-04-04 00:41:57.208414 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55) 2026-04-04 00:41:57.208418 | orchestrator | 2026-04-04 00:41:57.208423 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208427 | orchestrator | Saturday 04 April 2026 00:41:48 +0000 (0:00:00.375) 0:00:26.330 ******** 2026-04-04 00:41:57.208432 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159) 2026-04-04 00:41:57.208436 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159) 2026-04-04 00:41:57.208441 | orchestrator | 2026-04-04 00:41:57.208445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208450 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.389) 0:00:26.719 ******** 2026-04-04 00:41:57.208454 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8) 2026-04-04 00:41:57.208474 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8) 2026-04-04 00:41:57.208479 | orchestrator | 2026-04-04 00:41:57.208483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:41:57.208487 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.382) 0:00:27.101 ******** 2026-04-04 00:41:57.208492 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:41:57.208496 | orchestrator | 2026-04-04 00:41:57.208501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 
00:41:57.208505 | orchestrator | Saturday 04 April 2026 00:41:49 +0000 (0:00:00.291) 0:00:27.393 ******** 2026-04-04 00:41:57.208509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-04 00:41:57.208515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-04 00:41:57.208519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-04 00:41:57.208523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-04 00:41:57.208528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-04 00:41:57.208532 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-04 00:41:57.208536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-04 00:41:57.208541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-04 00:41:57.208545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-04 00:41:57.208550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-04 00:41:57.208554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-04 00:41:57.208558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-04 00:41:57.208563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-04 00:41:57.208567 | orchestrator | 2026-04-04 00:41:57.208571 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208576 | 
orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.468) 0:00:27.861 ******** 2026-04-04 00:41:57.208580 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208584 | orchestrator | 2026-04-04 00:41:57.208589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208593 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.173) 0:00:28.035 ******** 2026-04-04 00:41:57.208598 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208602 | orchestrator | 2026-04-04 00:41:57.208606 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208611 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.173) 0:00:28.208 ******** 2026-04-04 00:41:57.208615 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208619 | orchestrator | 2026-04-04 00:41:57.208635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208640 | orchestrator | Saturday 04 April 2026 00:41:50 +0000 (0:00:00.170) 0:00:28.379 ******** 2026-04-04 00:41:57.208644 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208648 | orchestrator | 2026-04-04 00:41:57.208653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208657 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.184) 0:00:28.563 ******** 2026-04-04 00:41:57.208661 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208666 | orchestrator | 2026-04-04 00:41:57.208670 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208679 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.169) 0:00:28.733 ******** 2026-04-04 00:41:57.208683 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208687 | orchestrator | 2026-04-04 
00:41:57.208692 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208696 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.188) 0:00:28.922 ******** 2026-04-04 00:41:57.208701 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208705 | orchestrator | 2026-04-04 00:41:57.208710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208714 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.203) 0:00:29.125 ******** 2026-04-04 00:41:57.208730 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208735 | orchestrator | 2026-04-04 00:41:57.208739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208746 | orchestrator | Saturday 04 April 2026 00:41:51 +0000 (0:00:00.181) 0:00:29.307 ******** 2026-04-04 00:41:57.208751 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-04 00:41:57.208755 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-04 00:41:57.208786 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-04 00:41:57.208794 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-04 00:41:57.208801 | orchestrator | 2026-04-04 00:41:57.208810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208815 | orchestrator | Saturday 04 April 2026 00:41:52 +0000 (0:00:00.733) 0:00:30.040 ******** 2026-04-04 00:41:57.208819 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208823 | orchestrator | 2026-04-04 00:41:57.208827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208832 | orchestrator | Saturday 04 April 2026 00:41:52 +0000 (0:00:00.188) 0:00:30.229 ******** 2026-04-04 00:41:57.208836 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
00:41:57.208840 | orchestrator | 2026-04-04 00:41:57.208844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208849 | orchestrator | Saturday 04 April 2026 00:41:52 +0000 (0:00:00.184) 0:00:30.413 ******** 2026-04-04 00:41:57.208853 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208857 | orchestrator | 2026-04-04 00:41:57.208862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:41:57.208866 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.480) 0:00:30.894 ******** 2026-04-04 00:41:57.208870 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208874 | orchestrator | 2026-04-04 00:41:57.208881 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-04 00:41:57.208888 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.194) 0:00:31.088 ******** 2026-04-04 00:41:57.208894 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208900 | orchestrator | 2026-04-04 00:41:57.208908 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-04 00:41:57.208914 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.122) 0:00:31.211 ******** 2026-04-04 00:41:57.208921 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}}) 2026-04-04 00:41:57.208929 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}}) 2026-04-04 00:41:57.208936 | orchestrator | 2026-04-04 00:41:57.208943 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-04 00:41:57.208947 | orchestrator | Saturday 04 April 2026 00:41:53 +0000 (0:00:00.165) 0:00:31.376 ******** 2026-04-04 00:41:57.208954 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}) 2026-04-04 00:41:57.208960 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}) 2026-04-04 00:41:57.208968 | orchestrator | 2026-04-04 00:41:57.208973 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-04 00:41:57.208977 | orchestrator | Saturday 04 April 2026 00:41:55 +0000 (0:00:01.819) 0:00:33.196 ******** 2026-04-04 00:41:57.208981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})  2026-04-04 00:41:57.208987 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})  2026-04-04 00:41:57.208991 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:41:57.208995 | orchestrator | 2026-04-04 00:41:57.209000 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-04 00:41:57.209004 | orchestrator | Saturday 04 April 2026 00:41:55 +0000 (0:00:00.152) 0:00:33.349 ******** 2026-04-04 00:41:57.209008 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}) 2026-04-04 00:41:57.209017 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}) 2026-04-04 00:42:02.415967 | orchestrator | 2026-04-04 00:42:02.416062 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-04 00:42:02.416074 | orchestrator | Saturday 04 April 2026 
00:41:57 +0000 (0:00:01.361) 0:00:34.710 ********
2026-04-04 00:42:02.416082 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416101 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416110 | orchestrator |
2026-04-04 00:42:02.416118 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-04 00:42:02.416123 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.151) 0:00:34.861 ********
2026-04-04 00:42:02.416127 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416131 | orchestrator |
2026-04-04 00:42:02.416135 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-04 00:42:02.416139 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.173) 0:00:35.035 ********
2026-04-04 00:42:02.416153 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416161 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416165 | orchestrator |
2026-04-04 00:42:02.416169 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-04 00:42:02.416175 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.154) 0:00:35.190 ********
2026-04-04 00:42:02.416181 | orchestrator | skipping: [testbed-node-4]
2026-04-04
00:42:02.416188 | orchestrator |
2026-04-04 00:42:02.416195 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-04 00:42:02.416200 | orchestrator | Saturday 04 April 2026 00:41:57 +0000 (0:00:00.129) 0:00:35.319 ********
2026-04-04 00:42:02.416204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416227 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416231 | orchestrator |
2026-04-04 00:42:02.416235 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-04 00:42:02.416241 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.344) 0:00:35.484 ********
2026-04-04 00:42:02.416247 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416255 | orchestrator |
2026-04-04 00:42:02.416261 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-04 00:42:02.416268 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.165) 0:00:35.828 ********
2026-04-04 00:42:02.416275 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416289 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416296 | orchestrator |
2026-04-04 00:42:02.416300 | orchestrator | TASK [Prepare variables for OSD count check]
***********************************
2026-04-04 00:42:02.416304 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.173) 0:00:36.002 ********
2026-04-04 00:42:02.416309 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:02.416317 | orchestrator |
2026-04-04 00:42:02.416323 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-04 00:42:02.416330 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.139) 0:00:36.142 ********
2026-04-04 00:42:02.416336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416340 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416344 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416347 | orchestrator |
2026-04-04 00:42:02.416351 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-04 00:42:02.416355 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.143) 0:00:36.285 ********
2026-04-04 00:42:02.416359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416366 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416370 | orchestrator |
2026-04-04 00:42:02.416374 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-04 00:42:02.416389 | orchestrator | Saturday 04 April 2026 00:41:58 +0000 (0:00:00.129) 0:00:36.415
********
2026-04-04 00:42:02.416393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:02.416397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:02.416401 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416404 | orchestrator |
2026-04-04 00:42:02.416408 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-04 00:42:02.416412 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.134) 0:00:36.549 ********
2026-04-04 00:42:02.416416 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416419 | orchestrator |
2026-04-04 00:42:02.416423 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-04 00:42:02.416427 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.104) 0:00:36.654 ********
2026-04-04 00:42:02.416436 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416440 | orchestrator |
2026-04-04 00:42:02.416443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-04 00:42:02.416450 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.119) 0:00:36.773 ********
2026-04-04 00:42:02.416454 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416458 | orchestrator |
2026-04-04 00:42:02.416462 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-04 00:42:02.416465 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.121) 0:00:36.895 ********
2026-04-04 00:42:02.416469 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:02.416473 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-04-04
00:42:02.416477 | orchestrator | }
2026-04-04 00:42:02.416481 | orchestrator |
2026-04-04 00:42:02.416485 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-04 00:42:02.416489 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.129) 0:00:37.025 ********
2026-04-04 00:42:02.416492 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:02.416496 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-04-04 00:42:02.416500 | orchestrator | }
2026-04-04 00:42:02.416504 | orchestrator |
2026-04-04 00:42:02.416508 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-04 00:42:02.416512 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.170) 0:00:37.195 ********
2026-04-04 00:42:02.416516 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:02.416521 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-04-04 00:42:02.416526 | orchestrator | }
2026-04-04 00:42:02.416530 | orchestrator |
2026-04-04 00:42:02.416535 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-04 00:42:02.416539 | orchestrator | Saturday 04 April 2026 00:41:59 +0000 (0:00:00.136) 0:00:37.331 ********
2026-04-04 00:42:02.416544 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:02.416548 | orchestrator |
2026-04-04 00:42:02.416553 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-04 00:42:02.416557 | orchestrator | Saturday 04 April 2026 00:42:00 +0000 (0:00:00.607) 0:00:37.939 ********
2026-04-04 00:42:02.416562 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:02.416566 | orchestrator |
2026-04-04 00:42:02.416571 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-04 00:42:02.416575 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.513) 0:00:38.453 ********
2026-04-04
00:42:02.416580 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:02.416584 | orchestrator |
2026-04-04 00:42:02.416589 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-04 00:42:02.416594 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.492) 0:00:38.946 ********
2026-04-04 00:42:02.416598 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:02.416603 | orchestrator |
2026-04-04 00:42:02.416607 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-04 00:42:02.416612 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.122) 0:00:39.069 ********
2026-04-04 00:42:02.416616 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416621 | orchestrator |
2026-04-04 00:42:02.416626 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-04 00:42:02.416630 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.089) 0:00:39.158 ********
2026-04-04 00:42:02.416635 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416639 | orchestrator |
2026-04-04 00:42:02.416644 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-04 00:42:02.416648 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.095) 0:00:39.254 ********
2026-04-04 00:42:02.416654 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:02.416661 | orchestrator |     "vgs_report": {
2026-04-04 00:42:02.416669 | orchestrator |         "vg": []
2026-04-04 00:42:02.416675 | orchestrator |     }
2026-04-04 00:42:02.416683 | orchestrator | }
2026-04-04 00:42:02.416695 | orchestrator |
2026-04-04 00:42:02.416701 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-04 00:42:02.416706 | orchestrator | Saturday 04 April 2026 00:42:01 +0000 (0:00:00.115) 0:00:39.369 ********
2026-04-04 00:42:02.416712 |
orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416719 | orchestrator |
2026-04-04 00:42:02.416726 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-04 00:42:02.416732 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.115) 0:00:39.484 ********
2026-04-04 00:42:02.416739 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416743 | orchestrator |
2026-04-04 00:42:02.416747 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-04 00:42:02.416751 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.111) 0:00:39.596 ********
2026-04-04 00:42:02.416772 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416776 | orchestrator |
2026-04-04 00:42:02.416780 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-04 00:42:02.416784 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.124) 0:00:39.721 ********
2026-04-04 00:42:02.416788 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:02.416791 | orchestrator |
2026-04-04 00:42:02.416799 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-04 00:42:06.832071 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.112) 0:00:39.834 ********
2026-04-04 00:42:06.832151 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832160 | orchestrator |
2026-04-04 00:42:06.832167 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-04 00:42:06.832172 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.120) 0:00:39.955 ********
2026-04-04 00:42:06.832177 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832182 | orchestrator |
2026-04-04 00:42:06.832187 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-04 00:42:06.832192 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.265) 0:00:40.220 ********
2026-04-04 00:42:06.832197 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832201 | orchestrator |
2026-04-04 00:42:06.832206 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-04 00:42:06.832211 | orchestrator | Saturday 04 April 2026 00:42:02 +0000 (0:00:00.120) 0:00:40.340 ********
2026-04-04 00:42:06.832215 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832220 | orchestrator |
2026-04-04 00:42:06.832224 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-04 00:42:06.832229 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.122) 0:00:40.463 ********
2026-04-04 00:42:06.832234 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832238 | orchestrator |
2026-04-04 00:42:06.832243 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-04 00:42:06.832248 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.131) 0:00:40.595 ********
2026-04-04 00:42:06.832252 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832257 | orchestrator |
2026-04-04 00:42:06.832262 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-04 00:42:06.832267 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.126) 0:00:40.722 ********
2026-04-04 00:42:06.832271 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832276 | orchestrator |
2026-04-04 00:42:06.832303 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-04 00:42:06.832308 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.127) 0:00:40.849 ********
2026-04-04 00:42:06.832313 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832318
| orchestrator |
2026-04-04 00:42:06.832323 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-04 00:42:06.832327 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.133) 0:00:40.982 ********
2026-04-04 00:42:06.832332 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832350 | orchestrator |
2026-04-04 00:42:06.832355 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-04 00:42:06.832360 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.127) 0:00:41.110 ********
2026-04-04 00:42:06.832364 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832369 | orchestrator |
2026-04-04 00:42:06.832374 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-04 00:42:06.832378 | orchestrator | Saturday 04 April 2026 00:42:03 +0000 (0:00:00.152) 0:00:41.262 ********
2026-04-04 00:42:06.832384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832390 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832395 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832399 | orchestrator |
2026-04-04 00:42:06.832404 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-04 00:42:06.832409 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:00.199) 0:00:41.461 ********
2026-04-04 00:42:06.832413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832418 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832422 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832427 | orchestrator |
2026-04-04 00:42:06.832431 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-04 00:42:06.832436 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:00.154) 0:00:41.616 ********
2026-04-04 00:42:06.832440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832450 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832454 | orchestrator |
2026-04-04 00:42:06.832458 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-04 00:42:06.832463 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:00.165) 0:00:41.782 ********
2026-04-04 00:42:06.832468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832477 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832482 | orchestrator |
2026-04-04 00:42:06.832497 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-04 00:42:06.832501 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:00.324) 0:00:42.107 ********
2026-04-04
00:42:06.832506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832511 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832515 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832520 | orchestrator |
2026-04-04 00:42:06.832524 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-04 00:42:06.832529 | orchestrator | Saturday 04 April 2026 00:42:04 +0000 (0:00:00.156) 0:00:42.269 ********
2026-04-04 00:42:06.832537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832550 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832555 | orchestrator |
2026-04-04 00:42:06.832559 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-04 00:42:06.832564 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.140) 0:00:42.426 ********
2026-04-04 00:42:06.832568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832577 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832582 | orchestrator |
2026-04-04 00:42:06.832587 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-04 00:42:06.832591 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.137) 0:00:42.566 ********
2026-04-04 00:42:06.832596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832600 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832605 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832609 | orchestrator |
2026-04-04 00:42:06.832614 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-04 00:42:06.832619 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.137) 0:00:42.703 ********
2026-04-04 00:42:06.832623 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:06.832628 | orchestrator |
2026-04-04 00:42:06.832633 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-04 00:42:06.832638 | orchestrator | Saturday 04 April 2026 00:42:05 +0000 (0:00:00.498) 0:00:43.202 ********
2026-04-04 00:42:06.832643 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:06.832649 | orchestrator |
2026-04-04 00:42:06.832654 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-04 00:42:06.832659 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.550) 0:00:43.753 ********
2026-04-04 00:42:06.832665 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:42:06.832670 | orchestrator |
2026-04-04 00:42:06.832675 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-04 00:42:06.832680 | orchestrator | Saturday 04 April 2026
00:42:06 +0000 (0:00:00.131) 0:00:43.885 ********
2026-04-04 00:42:06.832686 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'vg_name': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832693 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'vg_name': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832698 | orchestrator |
2026-04-04 00:42:06.832704 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-04 00:42:06.832709 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.162) 0:00:44.048 ********
2026-04-04 00:42:06.832714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:06.832726 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:06.832735 | orchestrator |
2026-04-04 00:42:06.832740 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-04 00:42:06.832746 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.139) 0:00:44.188 ********
2026-04-04 00:42:06.832792 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:06.832803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:12.138160 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:12.138254 | orchestrator |
2026-04-04
00:42:12.138267 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-04 00:42:12.138276 | orchestrator | Saturday 04 April 2026 00:42:06 +0000 (0:00:00.131) 0:00:44.320 ********
2026-04-04 00:42:12.138283 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'})
2026-04-04 00:42:12.138292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'})
2026-04-04 00:42:12.138299 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:42:12.138306 | orchestrator |
2026-04-04 00:42:12.138313 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-04 00:42:12.138320 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.132) 0:00:44.452 ********
2026-04-04 00:42:12.138327 | orchestrator | ok: [testbed-node-4] => {
2026-04-04 00:42:12.138334 | orchestrator |     "lvm_report": {
2026-04-04 00:42:12.138342 | orchestrator |         "lv": [
2026-04-04 00:42:12.138362 | orchestrator |             {
2026-04-04 00:42:12.138369 | orchestrator |                 "lv_name": "osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f",
2026-04-04 00:42:12.138377 | orchestrator |                 "vg_name": "ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f"
2026-04-04 00:42:12.138383 | orchestrator |             },
2026-04-04 00:42:12.138390 | orchestrator |             {
2026-04-04 00:42:12.138397 | orchestrator |                 "lv_name": "osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa",
2026-04-04 00:42:12.138403 | orchestrator |                 "vg_name": "ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa"
2026-04-04 00:42:12.138410 | orchestrator |             }
2026-04-04 00:42:12.138417 | orchestrator |         ],
2026-04-04 00:42:12.138424 | orchestrator |         "pv": [
2026-04-04 00:42:12.138430 | orchestrator |             {
2026-04-04 00:42:12.138437 | orchestrator |                 "pv_name": "/dev/sdb",
2026-04-04
00:42:12.138444 | orchestrator |                 "vg_name": "ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f"
2026-04-04 00:42:12.138450 | orchestrator |             },
2026-04-04 00:42:12.138457 | orchestrator |             {
2026-04-04 00:42:12.138464 | orchestrator |                 "pv_name": "/dev/sdc",
2026-04-04 00:42:12.138471 | orchestrator |                 "vg_name": "ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa"
2026-04-04 00:42:12.138478 | orchestrator |             }
2026-04-04 00:42:12.138485 | orchestrator |         ]
2026-04-04 00:42:12.138492 | orchestrator |     }
2026-04-04 00:42:12.138498 | orchestrator | }
2026-04-04 00:42:12.138505 | orchestrator |
2026-04-04 00:42:12.138512 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-04 00:42:12.138518 | orchestrator |
2026-04-04 00:42:12.138525 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 00:42:12.138532 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.387) 0:00:44.840 ********
2026-04-04 00:42:12.138538 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-04 00:42:12.138545 | orchestrator |
2026-04-04 00:42:12.138552 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-04 00:42:12.138558 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.226) 0:00:45.066 ********
2026-04-04 00:42:12.138582 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:42:12.138589 | orchestrator |
2026-04-04 00:42:12.138595 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138602 | orchestrator | Saturday 04 April 2026 00:42:07 +0000 (0:00:00.218) 0:00:45.285 ********
2026-04-04 00:42:12.138608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-04-04 00:42:12.138615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-04-04
00:42:12.138621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-04-04 00:42:12.138632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-04-04 00:42:12.138639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-04-04 00:42:12.138645 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-04-04 00:42:12.138652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-04-04 00:42:12.138658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-04-04 00:42:12.138666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-04-04 00:42:12.138673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-04-04 00:42:12.138681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-04-04 00:42:12.138689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-04-04 00:42:12.138697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-04-04 00:42:12.138704 | orchestrator |
2026-04-04 00:42:12.138712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138724 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.365) 0:00:45.650 ********
2026-04-04 00:42:12.138735 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.138765 | orchestrator |
2026-04-04 00:42:12.138778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138789 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.177) 0:00:45.828
********
2026-04-04 00:42:12.138799 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.138810 | orchestrator |
2026-04-04 00:42:12.138822 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138850 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.161) 0:00:45.991 ********
2026-04-04 00:42:12.138861 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.138873 | orchestrator |
2026-04-04 00:42:12.138884 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138895 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.171) 0:00:46.162 ********
2026-04-04 00:42:12.138907 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.138919 | orchestrator |
2026-04-04 00:42:12.138930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138941 | orchestrator | Saturday 04 April 2026 00:42:08 +0000 (0:00:00.186) 0:00:46.348 ********
2026-04-04 00:42:12.138952 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.138963 | orchestrator |
2026-04-04 00:42:12.138974 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.138984 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.168) 0:00:46.517 ********
2026-04-04 00:42:12.138996 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.139009 | orchestrator |
2026-04-04 00:42:12.139020 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-04 00:42:12.139032 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.441) 0:00:46.959 ********
2026-04-04 00:42:12.139039 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:42:12.139055 | orchestrator |
2026-04-04 00:42:12.139062 | orchestrator | TASK [Add known links to the list of available
block devices] ****************** 2026-04-04 00:42:12.139069 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.180) 0:00:47.139 ******** 2026-04-04 00:42:12.139075 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:12.139082 | orchestrator | 2026-04-04 00:42:12.139088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:12.139095 | orchestrator | Saturday 04 April 2026 00:42:09 +0000 (0:00:00.178) 0:00:47.317 ******** 2026-04-04 00:42:12.139101 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c) 2026-04-04 00:42:12.139109 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c) 2026-04-04 00:42:12.139116 | orchestrator | 2026-04-04 00:42:12.139122 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:12.139129 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.418) 0:00:47.736 ******** 2026-04-04 00:42:12.139135 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae) 2026-04-04 00:42:12.139142 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae) 2026-04-04 00:42:12.139148 | orchestrator | 2026-04-04 00:42:12.139155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:12.139161 | orchestrator | Saturday 04 April 2026 00:42:10 +0000 (0:00:00.407) 0:00:48.144 ******** 2026-04-04 00:42:12.139168 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905) 2026-04-04 00:42:12.139174 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905) 2026-04-04 00:42:12.139181 | orchestrator | 2026-04-04 00:42:12.139187 | orchestrator | TASK [Add 
known links to the list of available block devices] ****************** 2026-04-04 00:42:12.139194 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.397) 0:00:48.542 ******** 2026-04-04 00:42:12.139200 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5) 2026-04-04 00:42:12.139207 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5) 2026-04-04 00:42:12.139213 | orchestrator | 2026-04-04 00:42:12.139220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-04 00:42:12.139226 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.414) 0:00:48.957 ******** 2026-04-04 00:42:12.139233 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-04 00:42:12.139239 | orchestrator | 2026-04-04 00:42:12.139246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:12.139253 | orchestrator | Saturday 04 April 2026 00:42:11 +0000 (0:00:00.311) 0:00:49.268 ******** 2026-04-04 00:42:12.139259 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-04 00:42:12.139265 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-04 00:42:12.139272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-04 00:42:12.139278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-04 00:42:12.139284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-04 00:42:12.139291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-04 00:42:12.139329 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-04 00:42:12.139337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-04 00:42:12.139343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-04 00:42:12.139354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-04 00:42:12.139361 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-04 00:42:12.139375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-04 00:42:20.704194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-04 00:42:20.704320 | orchestrator | 2026-04-04 00:42:20.704337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704349 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.368) 0:00:49.636 ******** 2026-04-04 00:42:20.704359 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704371 | orchestrator | 2026-04-04 00:42:20.704381 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704397 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.161) 0:00:49.798 ******** 2026-04-04 00:42:20.704413 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704429 | orchestrator | 2026-04-04 00:42:20.704446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704461 | orchestrator | Saturday 04 April 2026 00:42:12 +0000 (0:00:00.178) 0:00:49.976 ******** 2026-04-04 00:42:20.704477 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704494 | orchestrator | 2026-04-04 00:42:20.704511 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704547 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.457) 0:00:50.434 ******** 2026-04-04 00:42:20.704565 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704582 | orchestrator | 2026-04-04 00:42:20.704600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704618 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.173) 0:00:50.607 ******** 2026-04-04 00:42:20.704635 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704652 | orchestrator | 2026-04-04 00:42:20.704667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704684 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.179) 0:00:50.787 ******** 2026-04-04 00:42:20.704699 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704715 | orchestrator | 2026-04-04 00:42:20.704731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704778 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.224) 0:00:51.012 ******** 2026-04-04 00:42:20.704796 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704813 | orchestrator | 2026-04-04 00:42:20.704830 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704846 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.189) 0:00:51.201 ******** 2026-04-04 00:42:20.704863 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.704880 | orchestrator | 2026-04-04 00:42:20.704896 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.704912 | orchestrator | Saturday 04 April 2026 00:42:13 +0000 (0:00:00.215) 0:00:51.416 ******** 
2026-04-04 00:42:20.704929 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-04 00:42:20.704947 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-04 00:42:20.704965 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-04 00:42:20.704982 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-04 00:42:20.704999 | orchestrator | 2026-04-04 00:42:20.705017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.705036 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.646) 0:00:52.062 ******** 2026-04-04 00:42:20.705054 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705068 | orchestrator | 2026-04-04 00:42:20.705079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.705117 | orchestrator | Saturday 04 April 2026 00:42:14 +0000 (0:00:00.180) 0:00:52.243 ******** 2026-04-04 00:42:20.705127 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705137 | orchestrator | 2026-04-04 00:42:20.705146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.705156 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.211) 0:00:52.455 ******** 2026-04-04 00:42:20.705166 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705175 | orchestrator | 2026-04-04 00:42:20.705185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-04 00:42:20.705195 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.193) 0:00:52.648 ******** 2026-04-04 00:42:20.705204 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705214 | orchestrator | 2026-04-04 00:42:20.705223 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-04 00:42:20.705233 | orchestrator | Saturday 04 April 2026 00:42:15 
+0000 (0:00:00.202) 0:00:52.850 ******** 2026-04-04 00:42:20.705242 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705252 | orchestrator | 2026-04-04 00:42:20.705261 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-04 00:42:20.705271 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.345) 0:00:53.196 ******** 2026-04-04 00:42:20.705281 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '92575011-0645-5cdf-badf-43ad86ae8159'}}) 2026-04-04 00:42:20.705291 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '35995e13-d19e-546f-ae20-ff296f4077c7'}}) 2026-04-04 00:42:20.705300 | orchestrator | 2026-04-04 00:42:20.705310 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-04 00:42:20.705321 | orchestrator | Saturday 04 April 2026 00:42:15 +0000 (0:00:00.187) 0:00:53.383 ******** 2026-04-04 00:42:20.705332 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'}) 2026-04-04 00:42:20.705343 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'}) 2026-04-04 00:42:20.705353 | orchestrator | 2026-04-04 00:42:20.705362 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-04 00:42:20.705392 | orchestrator | Saturday 04 April 2026 00:42:17 +0000 (0:00:01.994) 0:00:55.377 ******** 2026-04-04 00:42:20.705403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:20.705414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:20.705424 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705433 | orchestrator | 2026-04-04 00:42:20.705443 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-04 00:42:20.705452 | orchestrator | Saturday 04 April 2026 00:42:18 +0000 (0:00:00.180) 0:00:55.558 ******** 2026-04-04 00:42:20.705462 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'}) 2026-04-04 00:42:20.705481 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'}) 2026-04-04 00:42:20.705491 | orchestrator | 2026-04-04 00:42:20.705500 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-04 00:42:20.705510 | orchestrator | Saturday 04 April 2026 00:42:19 +0000 (0:00:01.371) 0:00:56.929 ******** 2026-04-04 00:42:20.705520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:20.705537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:20.705547 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705556 | orchestrator | 2026-04-04 00:42:20.705565 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-04 00:42:20.705575 | orchestrator | Saturday 04 April 2026 00:42:19 +0000 (0:00:00.143) 0:00:57.073 ******** 2026-04-04 00:42:20.705584 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705594 | 
orchestrator | 2026-04-04 00:42:20.705603 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-04 00:42:20.705613 | orchestrator | Saturday 04 April 2026 00:42:19 +0000 (0:00:00.131) 0:00:57.205 ******** 2026-04-04 00:42:20.705622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:20.705632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:20.705641 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705651 | orchestrator | 2026-04-04 00:42:20.705660 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-04 00:42:20.705670 | orchestrator | Saturday 04 April 2026 00:42:19 +0000 (0:00:00.150) 0:00:57.355 ******** 2026-04-04 00:42:20.705679 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705689 | orchestrator | 2026-04-04 00:42:20.705698 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-04 00:42:20.705708 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.132) 0:00:57.488 ******** 2026-04-04 00:42:20.705717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:20.705727 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:20.705736 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705785 | orchestrator | 2026-04-04 00:42:20.705806 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 
2026-04-04 00:42:20.705823 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.143) 0:00:57.632 ******** 2026-04-04 00:42:20.705839 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705853 | orchestrator | 2026-04-04 00:42:20.705863 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-04 00:42:20.705880 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.156) 0:00:57.789 ******** 2026-04-04 00:42:20.705896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:20.705912 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:20.705928 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:20.705944 | orchestrator | 2026-04-04 00:42:20.705960 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-04 00:42:20.705976 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.144) 0:00:57.934 ******** 2026-04-04 00:42:20.705992 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:20.706008 | orchestrator | 2026-04-04 00:42:20.706105 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-04 00:42:20.706126 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.128) 0:00:58.062 ******** 2026-04-04 00:42:20.706169 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:26.375522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:26.375599 | 
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375608 | orchestrator | 2026-04-04 00:42:26.375616 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-04-04 00:42:26.375623 | orchestrator | Saturday 04 April 2026 00:42:20 +0000 (0:00:00.331) 0:00:58.394 ******** 2026-04-04 00:42:26.375629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:26.375635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:26.375641 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375646 | orchestrator | 2026-04-04 00:42:26.375664 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-04 00:42:26.375670 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.145) 0:00:58.539 ******** 2026-04-04 00:42:26.375675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:26.375681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:26.375686 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375692 | orchestrator | 2026-04-04 00:42:26.375697 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-04 00:42:26.375703 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.145) 0:00:58.685 ******** 2026-04-04 00:42:26.375708 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375713 | orchestrator | 2026-04-04 00:42:26.375719 | orchestrator | TASK [Fail 
if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-04 00:42:26.375724 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.145) 0:00:58.830 ******** 2026-04-04 00:42:26.375730 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375735 | orchestrator | 2026-04-04 00:42:26.375741 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-04-04 00:42:26.375791 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.114) 0:00:58.945 ******** 2026-04-04 00:42:26.375799 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.375805 | orchestrator | 2026-04-04 00:42:26.375810 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-04 00:42:26.375816 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.151) 0:00:59.097 ******** 2026-04-04 00:42:26.375822 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:42:26.375827 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-04 00:42:26.375833 | orchestrator | } 2026-04-04 00:42:26.375839 | orchestrator | 2026-04-04 00:42:26.375844 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-04 00:42:26.375850 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.128) 0:00:59.226 ******** 2026-04-04 00:42:26.375855 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:42:26.375861 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-04 00:42:26.375866 | orchestrator | } 2026-04-04 00:42:26.375872 | orchestrator | 2026-04-04 00:42:26.375877 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-04 00:42:26.375883 | orchestrator | Saturday 04 April 2026 00:42:21 +0000 (0:00:00.111) 0:00:59.337 ******** 2026-04-04 00:42:26.375888 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:42:26.375894 | orchestrator |  
"_num_osds_wanted_per_db_wal_vg": {} 2026-04-04 00:42:26.375899 | orchestrator | } 2026-04-04 00:42:26.375904 | orchestrator | 2026-04-04 00:42:26.375910 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-04 00:42:26.375915 | orchestrator | Saturday 04 April 2026 00:42:22 +0000 (0:00:00.125) 0:00:59.462 ******** 2026-04-04 00:42:26.375935 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:26.375941 | orchestrator | 2026-04-04 00:42:26.375946 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-04 00:42:26.375952 | orchestrator | Saturday 04 April 2026 00:42:22 +0000 (0:00:00.507) 0:00:59.970 ******** 2026-04-04 00:42:26.375957 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:26.375963 | orchestrator | 2026-04-04 00:42:26.375968 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-04 00:42:26.375973 | orchestrator | Saturday 04 April 2026 00:42:23 +0000 (0:00:00.531) 0:01:00.501 ******** 2026-04-04 00:42:26.375979 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:26.375984 | orchestrator | 2026-04-04 00:42:26.375989 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-04 00:42:26.375995 | orchestrator | Saturday 04 April 2026 00:42:23 +0000 (0:00:00.527) 0:01:01.029 ******** 2026-04-04 00:42:26.376000 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:26.376006 | orchestrator | 2026-04-04 00:42:26.376011 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-04 00:42:26.376016 | orchestrator | Saturday 04 April 2026 00:42:23 +0000 (0:00:00.243) 0:01:01.272 ******** 2026-04-04 00:42:26.376022 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376027 | orchestrator | 2026-04-04 00:42:26.376033 | orchestrator | TASK [Calculate VG sizes (with buffer)] 
**************************************** 2026-04-04 00:42:26.376038 | orchestrator | Saturday 04 April 2026 00:42:23 +0000 (0:00:00.094) 0:01:01.366 ******** 2026-04-04 00:42:26.376044 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376049 | orchestrator | 2026-04-04 00:42:26.376054 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-04 00:42:26.376059 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.084) 0:01:01.450 ******** 2026-04-04 00:42:26.376065 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:42:26.376071 | orchestrator |  "vgs_report": { 2026-04-04 00:42:26.376076 | orchestrator |  "vg": [] 2026-04-04 00:42:26.376093 | orchestrator |  } 2026-04-04 00:42:26.376101 | orchestrator | } 2026-04-04 00:42:26.376107 | orchestrator | 2026-04-04 00:42:26.376113 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-04 00:42:26.376121 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.114) 0:01:01.565 ******** 2026-04-04 00:42:26.376127 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376133 | orchestrator | 2026-04-04 00:42:26.376139 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-04 00:42:26.376145 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.120) 0:01:01.685 ******** 2026-04-04 00:42:26.376151 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376158 | orchestrator | 2026-04-04 00:42:26.376164 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-04 00:42:26.376171 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.110) 0:01:01.796 ******** 2026-04-04 00:42:26.376177 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376183 | orchestrator | 2026-04-04 00:42:26.376189 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices 
> available] ******************* 2026-04-04 00:42:26.376195 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.110) 0:01:01.907 ******** 2026-04-04 00:42:26.376201 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376208 | orchestrator | 2026-04-04 00:42:26.376214 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-04 00:42:26.376220 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.104) 0:01:02.011 ******** 2026-04-04 00:42:26.376227 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376233 | orchestrator | 2026-04-04 00:42:26.376239 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-04 00:42:26.376246 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.113) 0:01:02.125 ******** 2026-04-04 00:42:26.376252 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376263 | orchestrator | 2026-04-04 00:42:26.376270 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-04 00:42:26.376275 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.133) 0:01:02.258 ******** 2026-04-04 00:42:26.376281 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376286 | orchestrator | 2026-04-04 00:42:26.376302 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-04 00:42:26.376308 | orchestrator | Saturday 04 April 2026 00:42:24 +0000 (0:00:00.119) 0:01:02.378 ******** 2026-04-04 00:42:26.376320 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376325 | orchestrator | 2026-04-04 00:42:26.376331 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-04 00:42:26.376336 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.108) 0:01:02.487 ******** 2026-04-04 00:42:26.376341 | orchestrator | skipping: 
[testbed-node-5] 2026-04-04 00:42:26.376347 | orchestrator | 2026-04-04 00:42:26.376352 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-04 00:42:26.376358 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.240) 0:01:02.728 ******** 2026-04-04 00:42:26.376363 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376368 | orchestrator | 2026-04-04 00:42:26.376374 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-04 00:42:26.376379 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.126) 0:01:02.854 ******** 2026-04-04 00:42:26.376384 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376390 | orchestrator | 2026-04-04 00:42:26.376395 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-04 00:42:26.376400 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.131) 0:01:02.986 ******** 2026-04-04 00:42:26.376406 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376411 | orchestrator | 2026-04-04 00:42:26.376416 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-04 00:42:26.376422 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.151) 0:01:03.137 ******** 2026-04-04 00:42:26.376428 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376433 | orchestrator | 2026-04-04 00:42:26.376438 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-04 00:42:26.376444 | orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.133) 0:01:03.271 ******** 2026-04-04 00:42:26.376449 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376454 | orchestrator | 2026-04-04 00:42:26.376460 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-04 00:42:26.376465 | 
orchestrator | Saturday 04 April 2026 00:42:25 +0000 (0:00:00.146) 0:01:03.418 ******** 2026-04-04 00:42:26.376470 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:26.376476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:26.376482 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376487 | orchestrator | 2026-04-04 00:42:26.376492 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-04 00:42:26.376498 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.156) 0:01:03.575 ******** 2026-04-04 00:42:26.376510 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:26.376516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:26.376521 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:26.376527 | orchestrator | 2026-04-04 00:42:26.376532 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-04 00:42:26.376542 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.154) 0:01:03.730 ******** 2026-04-04 00:42:26.376551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.388216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 
00:42:29.388313 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.388325 | orchestrator | 2026-04-04 00:42:29.389112 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-04 00:42:29.389172 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.152) 0:01:03.883 ******** 2026-04-04 00:42:29.389182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389217 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389224 | orchestrator | 2026-04-04 00:42:29.389231 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-04 00:42:29.389238 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.138) 0:01:04.021 ******** 2026-04-04 00:42:29.389245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389259 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389266 | orchestrator | 2026-04-04 00:42:29.389273 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-04 00:42:29.389280 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.148) 0:01:04.170 ******** 2026-04-04 00:42:29.389286 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 
'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389299 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389306 | orchestrator | 2026-04-04 00:42:29.389313 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-04-04 00:42:29.389320 | orchestrator | Saturday 04 April 2026 00:42:26 +0000 (0:00:00.152) 0:01:04.323 ******** 2026-04-04 00:42:29.389327 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389341 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389348 | orchestrator | 2026-04-04 00:42:29.389354 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-04 00:42:29.389361 | orchestrator | Saturday 04 April 2026 00:42:27 +0000 (0:00:00.353) 0:01:04.677 ******** 2026-04-04 00:42:29.389368 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389380 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389403 | orchestrator | 2026-04-04 00:42:29.389410 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-04 
00:42:29.389417 | orchestrator | Saturday 04 April 2026 00:42:27 +0000 (0:00:00.165) 0:01:04.842 ******** 2026-04-04 00:42:29.389423 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:29.389431 | orchestrator | 2026-04-04 00:42:29.389438 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-04 00:42:29.389444 | orchestrator | Saturday 04 April 2026 00:42:27 +0000 (0:00:00.506) 0:01:05.349 ******** 2026-04-04 00:42:29.389450 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:29.389457 | orchestrator | 2026-04-04 00:42:29.389464 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-04 00:42:29.389472 | orchestrator | Saturday 04 April 2026 00:42:28 +0000 (0:00:00.527) 0:01:05.876 ******** 2026-04-04 00:42:29.389478 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:29.389485 | orchestrator | 2026-04-04 00:42:29.389492 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-04 00:42:29.389499 | orchestrator | Saturday 04 April 2026 00:42:28 +0000 (0:00:00.150) 0:01:06.027 ******** 2026-04-04 00:42:29.389507 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'vg_name': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'}) 2026-04-04 00:42:29.389515 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'vg_name': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'}) 2026-04-04 00:42:29.389522 | orchestrator | 2026-04-04 00:42:29.389529 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-04 00:42:29.389536 | orchestrator | Saturday 04 April 2026 00:42:28 +0000 (0:00:00.158) 0:01:06.186 ******** 2026-04-04 00:42:29.389560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 
'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389568 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389575 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389582 | orchestrator | 2026-04-04 00:42:29.389589 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-04-04 00:42:29.389596 | orchestrator | Saturday 04 April 2026 00:42:28 +0000 (0:00:00.169) 0:01:06.355 ******** 2026-04-04 00:42:29.389608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389622 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389629 | orchestrator | 2026-04-04 00:42:29.389636 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-04 00:42:29.389643 | orchestrator | Saturday 04 April 2026 00:42:29 +0000 (0:00:00.147) 0:01:06.503 ******** 2026-04-04 00:42:29.389650 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'})  2026-04-04 00:42:29.389657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'})  2026-04-04 00:42:29.389664 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:29.389672 | orchestrator | 2026-04-04 00:42:29.389679 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-04 
00:42:29.389686 | orchestrator | Saturday 04 April 2026 00:42:29 +0000 (0:00:00.147) 0:01:06.650 ******** 2026-04-04 00:42:29.389692 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 00:42:29.389700 | orchestrator |  "lvm_report": { 2026-04-04 00:42:29.389707 | orchestrator |  "lv": [ 2026-04-04 00:42:29.389721 | orchestrator |  { 2026-04-04 00:42:29.389729 | orchestrator |  "lv_name": "osd-block-35995e13-d19e-546f-ae20-ff296f4077c7", 2026-04-04 00:42:29.389737 | orchestrator |  "vg_name": "ceph-35995e13-d19e-546f-ae20-ff296f4077c7" 2026-04-04 00:42:29.389744 | orchestrator |  }, 2026-04-04 00:42:29.389777 | orchestrator |  { 2026-04-04 00:42:29.389783 | orchestrator |  "lv_name": "osd-block-92575011-0645-5cdf-badf-43ad86ae8159", 2026-04-04 00:42:29.389789 | orchestrator |  "vg_name": "ceph-92575011-0645-5cdf-badf-43ad86ae8159" 2026-04-04 00:42:29.389795 | orchestrator |  } 2026-04-04 00:42:29.389801 | orchestrator |  ], 2026-04-04 00:42:29.389807 | orchestrator |  "pv": [ 2026-04-04 00:42:29.389814 | orchestrator |  { 2026-04-04 00:42:29.389820 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-04 00:42:29.389827 | orchestrator |  "vg_name": "ceph-92575011-0645-5cdf-badf-43ad86ae8159" 2026-04-04 00:42:29.389833 | orchestrator |  }, 2026-04-04 00:42:29.389839 | orchestrator |  { 2026-04-04 00:42:29.389845 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-04 00:42:29.389851 | orchestrator |  "vg_name": "ceph-35995e13-d19e-546f-ae20-ff296f4077c7" 2026-04-04 00:42:29.389858 | orchestrator |  } 2026-04-04 00:42:29.389864 | orchestrator |  ] 2026-04-04 00:42:29.389870 | orchestrator |  } 2026-04-04 00:42:29.389877 | orchestrator | } 2026-04-04 00:42:29.389883 | orchestrator | 2026-04-04 00:42:29.389889 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:42:29.389896 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-04 00:42:29.389902 | 
orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-04 00:42:29.389908 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-04-04 00:42:29.389914 | orchestrator | 2026-04-04 00:42:29.389921 | orchestrator | 2026-04-04 00:42:29.389927 | orchestrator | 2026-04-04 00:42:29.389933 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:42:29.389939 | orchestrator | Saturday 04 April 2026 00:42:29 +0000 (0:00:00.146) 0:01:06.797 ******** 2026-04-04 00:42:29.389945 | orchestrator | =============================================================================== 2026-04-04 00:42:29.389951 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s 2026-04-04 00:42:29.389958 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s 2026-04-04 00:42:29.389964 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-04-04 00:42:29.389970 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2026-04-04 00:42:29.389976 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s 2026-04-04 00:42:29.389982 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s 2026-04-04 00:42:29.389988 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.46s 2026-04-04 00:42:29.389994 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2026-04-04 00:42:29.390007 | orchestrator | Add known links to the list of available block devices ------------------ 1.14s 2026-04-04 00:42:29.744743 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-04-04 
00:42:29.744894 | orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2026-04-04 00:42:29.744904 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-04-04 00:42:29.744911 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2026-04-04 00:42:29.744917 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.69s 2026-04-04 00:42:29.744944 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.68s 2026-04-04 00:42:29.744950 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-04-04 00:42:29.744968 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-04-04 00:42:29.744974 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.63s 2026-04-04 00:42:29.744980 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.62s 2026-04-04 00:42:29.744986 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.61s 2026-04-04 00:42:41.478620 | orchestrator | 2026-04-04 00:42:41 | INFO  | Prepare task for execution of facts. 2026-04-04 00:42:41.554340 | orchestrator | 2026-04-04 00:42:41 | INFO  | Task 83db819e-a4f7-469f-95d4-04e547713c8e (facts) was prepared for execution. 2026-04-04 00:42:41.554430 | orchestrator | 2026-04-04 00:42:41 | INFO  | It takes a moment until task 83db819e-a4f7-469f-95d4-04e547713c8e (facts) has been started and output is visible here. 
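The Ceph play above gathers `lvs`/`pvs` output ("Get list of Ceph LVs/PVs with associated VGs"), merges it in "Combine JSON from _lvs_cmd_output/_pvs_cmd_output", and prints the result as `lvm_report`. A minimal sketch of that merge, assuming the commands were run with `--reportformat json` (the function names below are illustrative, not the playbook's actual helpers):

```python
import json

def combine_lvm_report(lvs_json: str, pvs_json: str) -> dict:
    """Merge `lvs --reportformat json -o lv_name,vg_name` and
    `pvs --reportformat json -o pv_name,vg_name` output into one
    structure shaped like the printed lvm_report."""
    lvs = json.loads(lvs_json)["report"][0]["lv"]
    pvs = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lvs, "pv": pvs}

def vg_lv_names(report: dict) -> list[str]:
    """Build the VG/LV name list used to check lvm_volumes entries."""
    return [f"{e['vg_name']}/{e['lv_name']}" for e in report["lv"]]
```

With the report merged this way, the "Fail if ... LV defined in lvm_volumes is missing" tasks reduce to a membership check of each expected `data_vg/data` pair against `vg_lv_names(report)`.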
2026-04-04 00:42:53.517076 | orchestrator | 2026-04-04 00:42:53.517197 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-04 00:42:53.517210 | orchestrator | 2026-04-04 00:42:53.517229 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-04 00:42:53.517238 | orchestrator | Saturday 04 April 2026 00:42:44 +0000 (0:00:00.294) 0:00:00.294 ******** 2026-04-04 00:42:53.517246 | orchestrator | ok: [testbed-manager] 2026-04-04 00:42:53.517257 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:42:53.517264 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:42:53.517271 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:42:53.517278 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:42:53.517285 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:42:53.517293 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:53.517301 | orchestrator | 2026-04-04 00:42:53.517309 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-04 00:42:53.517317 | orchestrator | Saturday 04 April 2026 00:42:45 +0000 (0:00:01.262) 0:00:01.557 ******** 2026-04-04 00:42:53.517324 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:42:53.517333 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:42:53.517340 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:42:53.517347 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:42:53.517355 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:42:53.517363 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:53.517371 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:53.517378 | orchestrator | 2026-04-04 00:42:53.517385 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 00:42:53.517392 | orchestrator | 2026-04-04 00:42:53.517399 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-04 00:42:53.517406 | orchestrator | Saturday 04 April 2026 00:42:46 +0000 (0:00:01.107) 0:00:02.664 ******** 2026-04-04 00:42:53.517414 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:42:53.517422 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:42:53.517429 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:42:53.517437 | orchestrator | ok: [testbed-manager] 2026-04-04 00:42:53.517444 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:42:53.517451 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:42:53.517458 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:42:53.517466 | orchestrator | 2026-04-04 00:42:53.517473 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 00:42:53.517481 | orchestrator | 2026-04-04 00:42:53.517489 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 00:42:53.517497 | orchestrator | Saturday 04 April 2026 00:42:52 +0000 (0:00:05.745) 0:00:08.410 ******** 2026-04-04 00:42:53.517505 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:42:53.517512 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:42:53.517542 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:42:53.517549 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:42:53.517557 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:42:53.517565 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:42:53.517571 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:42:53.517578 | orchestrator | 2026-04-04 00:42:53.517586 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:42:53.517594 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517603 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-04 00:42:53.517610 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517617 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517625 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517632 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517641 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:42:53.517648 | orchestrator | 2026-04-04 00:42:53.517655 | orchestrator | 2026-04-04 00:42:53.517663 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:42:53.517671 | orchestrator | Saturday 04 April 2026 00:42:53 +0000 (0:00:00.489) 0:00:08.900 ******** 2026-04-04 00:42:53.517679 | orchestrator | =============================================================================== 2026-04-04 00:42:53.517688 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2026-04-04 00:42:53.517696 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.26s 2026-04-04 00:42:53.517717 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-04-04 00:42:53.517725 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2026-04-04 00:43:04.972938 | orchestrator | 2026-04-04 00:43:04 | INFO  | Prepare task for execution of frr. 2026-04-04 00:43:05.043706 | orchestrator | 2026-04-04 00:43:05 | INFO  | Task 780b3325-2aeb-45b6-a24b-97f52c178c8c (frr) was prepared for execution. 
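The `osism.commons.facts` role above creates a custom facts directory and optionally copies fact files into it; Ansible then exposes any `*.fact` file found during fact gathering as `ansible_local.<name>`. A hedged sketch of that mechanism, using a temporary path in place of the real facts directory (paths and fact names here are generic assumptions, not taken from the role):

```python
import json
from pathlib import Path

# Stand-in for /etc/ansible/facts.d, the default local-facts directory.
FACTS_D = Path("/tmp/facts.d")

def write_fact(name: str, data: dict) -> Path:
    """Write a static JSON fact file; during fact gathering Ansible
    would expose its contents as ansible_local.<name>."""
    FACTS_D.mkdir(parents=True, exist_ok=True)
    path = FACTS_D / f"{name}.fact"
    path.write_text(json.dumps(data))
    return path

def read_fact(name: str) -> dict:
    """Read a fact file back, as the setup module would."""
    return json.loads((FACTS_D / f"{name}.fact").read_text())
```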
2026-04-04 00:43:05.043852 | orchestrator | 2026-04-04 00:43:05 | INFO  | It takes a moment until task 780b3325-2aeb-45b6-a24b-97f52c178c8c (frr) has been started and output is visible here. 2026-04-04 00:43:27.402340 | orchestrator | 2026-04-04 00:43:27.402462 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-04 00:43:27.402484 | orchestrator | 2026-04-04 00:43:27.402500 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-04 00:43:27.402517 | orchestrator | Saturday 04 April 2026 00:43:07 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-04-04 00:43:27.402533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-04 00:43:27.402549 | orchestrator | 2026-04-04 00:43:27.402565 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-04 00:43:27.402581 | orchestrator | Saturday 04 April 2026 00:43:08 +0000 (0:00:00.198) 0:00:00.465 ******** 2026-04-04 00:43:27.402597 | orchestrator | changed: [testbed-manager] 2026-04-04 00:43:27.402614 | orchestrator | 2026-04-04 00:43:27.402630 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-04 00:43:27.402678 | orchestrator | Saturday 04 April 2026 00:43:09 +0000 (0:00:01.336) 0:00:01.801 ******** 2026-04-04 00:43:27.402695 | orchestrator | changed: [testbed-manager] 2026-04-04 00:43:27.402711 | orchestrator | 2026-04-04 00:43:27.402726 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-04 00:43:27.402807 | orchestrator | Saturday 04 April 2026 00:43:17 +0000 (0:00:08.238) 0:00:10.039 ******** 2026-04-04 00:43:27.402824 | orchestrator | ok: [testbed-manager] 2026-04-04 00:43:27.402841 | orchestrator | 2026-04-04 00:43:27.402859 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-04 00:43:27.402876 | orchestrator | Saturday 04 April 2026 00:43:18 +0000 (0:00:00.961) 0:00:11.001 ******** 2026-04-04 00:43:27.402892 | orchestrator | changed: [testbed-manager] 2026-04-04 00:43:27.402907 | orchestrator | 2026-04-04 00:43:27.402923 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-04 00:43:27.402940 | orchestrator | Saturday 04 April 2026 00:43:19 +0000 (0:00:00.860) 0:00:11.861 ******** 2026-04-04 00:43:27.402956 | orchestrator | ok: [testbed-manager] 2026-04-04 00:43:27.402972 | orchestrator | 2026-04-04 00:43:27.402988 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-04-04 00:43:27.403004 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:01.067) 0:00:12.929 ******** 2026-04-04 00:43:27.403019 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:43:27.403033 | orchestrator | 2026-04-04 00:43:27.403048 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-04-04 00:43:27.403063 | orchestrator | Saturday 04 April 2026 00:43:20 +0000 (0:00:00.140) 0:00:13.069 ******** 2026-04-04 00:43:27.403079 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:43:27.403096 | orchestrator | 2026-04-04 00:43:27.403112 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-04-04 00:43:27.403128 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.217) 0:00:13.287 ******** 2026-04-04 00:43:27.403144 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:43:27.403160 | orchestrator | 2026-04-04 00:43:27.403177 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-04 00:43:27.403194 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.143) 0:00:13.431 ******** 2026-04-04 
00:43:27.403210 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:43:27.403226 | orchestrator | 2026-04-04 00:43:27.403242 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-04 00:43:27.403257 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.142) 0:00:13.574 ******** 2026-04-04 00:43:27.403273 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:43:27.403288 | orchestrator | 2026-04-04 00:43:27.403304 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-04 00:43:27.403320 | orchestrator | Saturday 04 April 2026 00:43:21 +0000 (0:00:00.131) 0:00:13.705 ******** 2026-04-04 00:43:27.403335 | orchestrator | changed: [testbed-manager] 2026-04-04 00:43:27.403351 | orchestrator | 2026-04-04 00:43:27.403366 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-04 00:43:27.403382 | orchestrator | Saturday 04 April 2026 00:43:22 +0000 (0:00:00.896) 0:00:14.601 ******** 2026-04-04 00:43:27.403397 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-04 00:43:27.403413 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-04 00:43:27.403431 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-04 00:43:27.403446 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-04 00:43:27.403462 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-04 00:43:27.403478 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-04 00:43:27.403506 | orchestrator | 2026-04-04 00:43:27.403522 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-04-04 00:43:27.403538 | orchestrator | Saturday 04 April 2026 00:43:24 +0000 (0:00:02.199) 0:00:16.801 ******** 2026-04-04 00:43:27.403554 | orchestrator | ok: [testbed-manager] 2026-04-04 00:43:27.403569 | orchestrator | 2026-04-04 00:43:27.403585 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-04 00:43:27.403601 | orchestrator | Saturday 04 April 2026 00:43:25 +0000 (0:00:01.191) 0:00:17.993 ******** 2026-04-04 00:43:27.403618 | orchestrator | changed: [testbed-manager] 2026-04-04 00:43:27.403633 | orchestrator | 2026-04-04 00:43:27.403649 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:43:27.403666 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 00:43:27.403681 | orchestrator | 2026-04-04 00:43:27.403697 | orchestrator | 2026-04-04 00:43:27.403735 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:43:27.403773 | orchestrator | Saturday 04 April 2026 00:43:27 +0000 (0:00:01.370) 0:00:19.363 ******** 2026-04-04 00:43:27.403788 | orchestrator | =============================================================================== 2026-04-04 00:43:27.403802 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.24s 2026-04-04 00:43:27.403838 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.20s 2026-04-04 00:43:27.403855 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.37s 2026-04-04 00:43:27.403870 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.34s 2026-04-04 00:43:27.403885 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.19s 
2026-04-04 00:43:27.403901 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.07s 2026-04-04 00:43:27.403917 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.96s 2026-04-04 00:43:27.403933 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.90s 2026-04-04 00:43:27.403949 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.86s 2026-04-04 00:43:27.403964 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.22s 2026-04-04 00:43:27.403979 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s 2026-04-04 00:43:27.403995 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-04-04 00:43:27.404011 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-04-04 00:43:27.404025 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.14s 2026-04-04 00:43:27.404039 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-04-04 00:43:27.567794 | orchestrator | 2026-04-04 00:43:27.570552 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Apr 4 00:43:27 UTC 2026 2026-04-04 00:43:27.570606 | orchestrator | 2026-04-04 00:43:28.698704 | orchestrator | 2026-04-04 00:43:28 | INFO  | Collection nutshell is prepared for execution 2026-04-04 00:43:28.812409 | orchestrator | 2026-04-04 00:43:28 | INFO  | A [0] - dotfiles 2026-04-04 00:43:38.895208 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - homer 2026-04-04 00:43:38.895293 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - netdata 2026-04-04 00:43:38.895302 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - openstackclient 2026-04-04 00:43:38.895307 | orchestrator | 2026-04-04 00:43:38 
| INFO  | A [0] - phpmyadmin 2026-04-04 00:43:38.895311 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - common 2026-04-04 00:43:38.899048 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- loadbalancer 2026-04-04 00:43:38.899522 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [2] --- opensearch 2026-04-04 00:43:38.900000 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [2] --- mariadb-ng 2026-04-04 00:43:38.900656 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [3] ---- horizon 2026-04-04 00:43:38.900984 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [3] ---- keystone 2026-04-04 00:43:38.901796 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- neutron 2026-04-04 00:43:38.902310 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ wait-for-nova 2026-04-04 00:43:38.902758 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [6] ------- octavia 2026-04-04 00:43:38.904446 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- barbican 2026-04-04 00:43:38.904467 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- designate 2026-04-04 00:43:38.904471 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- ironic 2026-04-04 00:43:38.904643 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- placement 2026-04-04 00:43:38.904650 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- magnum 2026-04-04 00:43:38.906227 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- openvswitch 2026-04-04 00:43:38.907258 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [2] --- ovn 2026-04-04 00:43:38.907271 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- memcached 2026-04-04 00:43:38.907277 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- redis 2026-04-04 00:43:38.907283 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- rabbitmq-ng 2026-04-04 00:43:38.907643 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - kubernetes 2026-04-04 00:43:38.910231 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- 
kubeconfig 2026-04-04 00:43:38.910260 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- copy-kubeconfig 2026-04-04 00:43:38.910320 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [0] - ceph 2026-04-04 00:43:38.912404 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [1] -- ceph-pools 2026-04-04 00:43:38.912429 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [2] --- copy-ceph-keys 2026-04-04 00:43:38.912705 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [3] ---- cephclient 2026-04-04 00:43:38.912713 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-04-04 00:43:38.913000 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- wait-for-keystone 2026-04-04 00:43:38.913067 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ kolla-ceph-rgw 2026-04-04 00:43:38.913414 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ glance 2026-04-04 00:43:38.913421 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ cinder 2026-04-04 00:43:38.913521 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ nova 2026-04-04 00:43:38.913865 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [4] ----- prometheus 2026-04-04 00:43:38.914000 | orchestrator | 2026-04-04 00:43:38 | INFO  | A [5] ------ grafana 2026-04-04 00:43:39.136281 | orchestrator | 2026-04-04 00:43:39 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-04-04 00:43:39.136719 | orchestrator | 2026-04-04 00:43:39 | INFO  | Tasks are running in the background 2026-04-04 00:43:41.200249 | orchestrator | 2026-04-04 00:43:41 | INFO  | No task IDs specified, wait for all currently running tasks 2026-04-04 00:43:43.405312 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:43.405848 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:43.406581 | orchestrator | 2026-04-04 00:43:43 | INFO 
 | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:43.407440 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:43.408233 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:43.410438 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:43.411355 | orchestrator | 2026-04-04 00:43:43 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:43.411523 | orchestrator | 2026-04-04 00:43:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:43:46.571854 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:46.571965 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:46.571982 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:46.571994 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:46.572005 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:46.572016 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:46.572027 | orchestrator | 2026-04-04 00:43:46 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:46.572038 | orchestrator | 2026-04-04 00:43:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:43:49.783655 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:49.783869 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task 
e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:49.783900 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:49.783922 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:49.783942 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:49.783960 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:49.783999 | orchestrator | 2026-04-04 00:43:49 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:49.784011 | orchestrator | 2026-04-04 00:43:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:43:52.658349 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:52.658455 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:52.659448 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:52.661278 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:52.661306 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:52.661311 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:52.662447 | orchestrator | 2026-04-04 00:43:52 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:52.662477 | orchestrator | 2026-04-04 00:43:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:43:55.703255 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task 
f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:55.703355 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:55.703872 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:55.704082 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:55.704596 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:55.704900 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:55.708892 | orchestrator | 2026-04-04 00:43:55 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:55.708960 | orchestrator | 2026-04-04 00:43:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:43:58.786932 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:43:58.788810 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:43:58.790412 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:43:58.791627 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:43:58.793072 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:43:58.795261 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:43:58.795691 | orchestrator | 2026-04-04 00:43:58 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:43:58.795865 | orchestrator | 2026-04-04 
00:43:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:01.956279 | orchestrator | 2026-04-04 00:44:01 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:02.209079 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:02.209181 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:02.209197 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state STARTED 2026-04-04 00:44:02.209209 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:02.209221 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:02.209234 | orchestrator | 2026-04-04 00:44:02 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:02.209247 | orchestrator | 2026-04-04 00:44:02 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:05.289236 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:05.292478 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:05.294316 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:05.296010 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:05.296797 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task a905be9b-b18a-4d22-a561-6adddefe96db is in state SUCCESS 2026-04-04 00:44:05.297803 | orchestrator | 2026-04-04 00:44:05.297842 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-04-04 00:44:05.297851 | 
orchestrator | 2026-04-04 00:44:05.297858 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2026-04-04 00:44:05.297866 | orchestrator | Saturday 04 April 2026 00:43:48 +0000 (0:00:00.547) 0:00:00.547 ******** 2026-04-04 00:44:05.297872 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:44:05.297880 | orchestrator | changed: [testbed-manager] 2026-04-04 00:44:05.297886 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:44:05.297892 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:44:05.297911 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:44:05.297917 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:44:05.297935 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:44:05.297941 | orchestrator | 2026-04-04 00:44:05.297948 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-04-04 00:44:05.297954 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:04.933) 0:00:05.480 ******** 2026-04-04 00:44:05.297968 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:44:05.297976 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:44:05.297984 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:44:05.297990 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:44:05.297996 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:44:05.298003 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:44:05.298010 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:44:05.298053 | orchestrator | 2026-04-04 00:44:05.298061 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-04-04 00:44:05.298069 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:01.413) 0:00:06.894 ******** 2026-04-04 00:44:05.298081 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:53.950278', 'end': '2026-04-04 00:43:53.954356', 'delta': '0:00:00.004078', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298091 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.010940', 'end': '2026-04-04 00:43:54.020629', 'delta': '0:00:00.009689', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298145 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.228204', 'end': '2026-04-04 00:43:54.235680', 'delta': '0:00:00.007476', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298175 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.317637', 'end': '2026-04-04 00:43:54.327463', 'delta': '0:00:00.009826', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298184 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.357148', 'end': '2026-04-04 00:43:54.366996', 'delta': '0:00:00.009848', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298191 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.575904', 'end': '2026-04-04 00:43:54.586505', 'delta': '0:00:00.010601', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298199 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-04-04 00:43:54.655664', 'end': '2026-04-04 00:43:54.662656', 'delta': '0:00:00.006992', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-04-04 00:44:05.298212 | orchestrator | 2026-04-04 00:44:05.298218 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-04-04 00:44:05.298225 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:01.962) 0:00:08.856 ******** 2026-04-04 00:44:05.298231 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:44:05.298238 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:44:05.298244 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:44:05.298250 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:44:05.298257 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:44:05.298264 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:44:05.298271 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:44:05.298277 | orchestrator | 2026-04-04 00:44:05.298285 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-04-04 00:44:05.298291 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:01.430) 0:00:10.287 ******** 2026-04-04 00:44:05.298299 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-04-04 00:44:05.298306 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-04-04 00:44:05.298312 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-04-04 00:44:05.298319 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-04-04 00:44:05.298326 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-04-04 00:44:05.298333 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-04-04 00:44:05.298340 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-04-04 00:44:05.298346 | orchestrator | 2026-04-04 00:44:05.298354 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:44:05.298367 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:44:05.298377 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:44:05.298384 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:44:05.298390 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:44:05.301103 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:05.301345 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:05.302654 | orchestrator | 2026-04-04 00:44:05 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:05.302683 | orchestrator | 2026-04-04 00:44:05 | INFO  | Wait 1 second(s) until the next check 2026-04-04 
00:44:08.373707 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:08.373859 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:08.373884 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:08.373902 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:08.373914 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:08.373925 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:08.373965 | orchestrator | 2026-04-04 00:44:08 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:08.373978 | orchestrator | 2026-04-04 00:44:08 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:11.393160 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:11.393424 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:11.393974 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:11.394687 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:11.397383 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:11.397428 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:11.397459 | orchestrator | 2026-04-04 00:44:11 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state 
STARTED 2026-04-04 00:44:11.398160 | orchestrator | 2026-04-04 00:44:11 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:14.433605 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:14.433689 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:14.435562 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:14.435619 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:14.437420 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:14.438715 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:14.439124 | orchestrator | 2026-04-04 00:44:14 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:14.439158 | orchestrator | 2026-04-04 00:44:14 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:17.507267 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:17.508925 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:17.513087 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:17.514925 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:17.519793 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:17.519862 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 
2026-04-04 00:44:17.523965 | orchestrator | 2026-04-04 00:44:17 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:17.524038 | orchestrator | 2026-04-04 00:44:17 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:20.865551 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:20.865621 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:20.865650 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:20.865654 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:20.865658 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:20.865662 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:20.865666 | orchestrator | 2026-04-04 00:44:20 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:20.865671 | orchestrator | 2026-04-04 00:44:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:23.847299 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:23.848348 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:23.851481 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:23.851538 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:23.853104 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 
2026-04-04 00:44:23.855170 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state STARTED 2026-04-04 00:44:23.857232 | orchestrator | 2026-04-04 00:44:23 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:23.857391 | orchestrator | 2026-04-04 00:44:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:27.293064 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:27.293179 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:27.293199 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:27.293204 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:27.293208 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:27.293212 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task 466af821-6e3d-41e5-997d-c8e726c0f7e3 is in state SUCCESS 2026-04-04 00:44:27.293216 | orchestrator | 2026-04-04 00:44:26 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:27.293220 | orchestrator | 2026-04-04 00:44:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:29.989525 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:29.989604 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:29.990953 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:29.992791 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 
2026-04-04 00:44:29.996480 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:29.998190 | orchestrator | 2026-04-04 00:44:29 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:29.998254 | orchestrator | 2026-04-04 00:44:29 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:33.032206 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:33.034412 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:33.035792 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:33.041791 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:33.043281 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 2026-04-04 00:44:33.045979 | orchestrator | 2026-04-04 00:44:33 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:33.046071 | orchestrator | 2026-04-04 00:44:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:36.081388 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:36.081635 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:36.082684 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:36.083632 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:36.093093 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state STARTED 
2026-04-04 00:44:36.093670 | orchestrator | 2026-04-04 00:44:36 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:36.093714 | orchestrator | 2026-04-04 00:44:36 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:39.137513 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:39.146551 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:39.148077 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:39.150530 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:39.167412 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task 6a20da85-6c7f-4f07-a030-d8cc9353c970 is in state SUCCESS 2026-04-04 00:44:39.174829 | orchestrator | 2026-04-04 00:44:39 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:39.174891 | orchestrator | 2026-04-04 00:44:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:42.227551 | orchestrator | 2026-04-04 00:44:42 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:44:42.228543 | orchestrator | 2026-04-04 00:44:42 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:44:42.229984 | orchestrator | 2026-04-04 00:44:42 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED 2026-04-04 00:44:42.231543 | orchestrator | 2026-04-04 00:44:42 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED 2026-04-04 00:44:42.233092 | orchestrator | 2026-04-04 00:44:42 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:44:42.233286 | orchestrator | 2026-04-04 00:44:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:44:45.284486 | 
2026-04-04 00:44:45 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:44:45 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED
2026-04-04 00:44:45 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state STARTED
2026-04-04 00:44:45 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED
2026-04-04 00:44:45 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:44:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:45:06 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:45:06 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED

PLAY [Apply role homer] ********************************************************

TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
Saturday 04 April 2026 00:43:48 +0000 (0:00:00.742)       0:00:00.742 ********
ok: [testbed-manager] => {
    "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
}

TASK [osism.services.homer : Create traefik external network] ******************
Saturday 04 April 2026 00:43:49 +0000 (0:00:00.651)       0:00:01.393 ********
ok: [testbed-manager]

TASK [osism.services.homer : Create required directories] **********************
Saturday 04 April 2026 00:43:51 +0000 (0:00:02.014)       0:00:03.408 ********
changed: [testbed-manager] => (item=/opt/homer/configuration)
ok: [testbed-manager] => (item=/opt/homer)

TASK [osism.services.homer : Copy config.yml configuration file] ***************
Saturday 04 April 2026 00:43:52 +0000 (0:00:01.344)       0:00:04.753 ********
changed: [testbed-manager]

TASK [osism.services.homer : Copy docker-compose.yml file] *********************
Saturday 04 April 2026 00:43:55 +0000 (0:00:02.714)       0:00:07.467 ********
changed: [testbed-manager]

TASK [osism.services.homer : Manage homer service] *****************************
Saturday 04 April 2026 00:43:56 +0000 (0:00:01.020)       0:00:08.488 ********
FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
Saturday 04 April 2026 00:44:21 +0000 (0:00:25.233)       0:00:33.721 ********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=7    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:44:23 +0000 (0:00:01.962)       0:00:35.684 ********
===============================================================================
osism.services.homer : Manage homer service ---------------------------- 25.23s
osism.services.homer : Copy config.yml configuration file --------------- 2.71s
osism.services.homer : Create traefik external network ------------------ 2.01s
osism.services.homer : Restart homer service ---------------------------- 1.96s
osism.services.homer : Create required directories ---------------------- 1.34s
osism.services.homer : Copy docker-compose.yml file --------------------- 1.02s
osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.65s

PLAY [Apply role openstackclient] **********************************************

TASK [osism.services.openstackclient : Include tasks] **************************
Saturday 04 April 2026 00:43:48 +0000 (0:00:00.789)       0:00:00.790 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

TASK [osism.services.openstackclient : Create required directories] ************
Saturday 04 April 2026 00:43:49 +0000 (0:00:00.719)       0:00:01.509 ********
changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
changed: [testbed-manager] => (item=/opt/openstackclient/data)
ok: [testbed-manager] => (item=/opt/openstackclient)

TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
Saturday 04 April 2026 00:43:52 +0000 (0:00:02.749)       0:00:04.259 ********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Manage openstackclient service] *********
Saturday 04 April 2026 00:43:55 +0000 (0:00:03.028)       0:00:07.288 ********
FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
ok: [testbed-manager]

TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
Saturday 04 April 2026 00:44:30 +0000 (0:00:35.213)       0:00:42.501 ********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
Saturday 04 April 2026 00:44:32 +0000 (0:00:01.527)       0:00:44.029 ********
ok: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
Saturday 04 April 2026 00:44:32 +0000 (0:00:00.855)       0:00:44.884 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
Saturday 04 April 2026 00:44:35 +0000 (0:00:02.112)       0:00:46.996 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
Saturday 04 April 2026 00:44:36 +0000 (0:00:01.158)       0:00:48.155 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
Saturday 04 April 2026 00:44:36 +0000 (0:00:00.583)       0:00:48.739 ********
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=10   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.351)       0:00:49.091 ********
===============================================================================
osism.services.openstackclient : Manage openstackclient service -------- 35.21s
osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.03s
osism.services.openstackclient : Create required directories ------------ 2.75s
osism.services.openstackclient : Restart openstackclient service -------- 2.11s
osism.services.openstackclient : Copy openstack wrapper script ---------- 1.53s
osism.services.openstackclient : Ensure that all containers are up ------ 1.16s
osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.86s
osism.services.openstackclient : Include tasks -------------------------- 0.72s
osism.services.openstackclient : Wait for an healthy service ------------ 0.58s
osism.services.openstackclient : Copy bash completion script ------------ 0.35s

PLAY [Apply role phpmyadmin] ***************************************************

TASK [osism.services.phpmyadmin : Create traefik external network] *************
Saturday 04 April 2026 00:44:06 +0000 (0:00:00.591)       0:00:00.591 ********
ok: [testbed-manager]

TASK [osism.services.phpmyadmin : Create required directories] *****************
Saturday 04 April 2026 00:44:07 +0000 (0:00:01.800)       0:00:02.392 ********
changed: [testbed-manager] => (item=/opt/phpmyadmin)

TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
Saturday 04 April 2026 00:44:08 +0000 (0:00:00.594)       0:00:02.986 ********
changed: [testbed-manager]

TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
Saturday 04 April 2026 00:44:10 +0000 (0:00:02.091)       0:00:05.077 ********
FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
ok: [testbed-manager]

RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
Saturday 04 April 2026 00:45:01 +0000 (0:00:50.812)       0:00:55.890 ********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:45:05 +0000 (0:00:03.872)       0:00:59.763 ********
===============================================================================
osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 50.81s
osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.87s
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.09s
osism.services.phpmyadmin : Create traefik external network ------------- 1.80s
osism.services.phpmyadmin : Create required directories ----------------- 0.59s

2026-04-04 00:45:06 | INFO  | Task e09ba297-84da-4124-a120-72b081b99905 is in state SUCCESS
2026-04-04 00:45:06 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state STARTED
2026-04-04 00:45:06 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:45:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:45:18 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:45:18 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED
2026-04-04 00:45:18 | INFO  | Task c9493231-62e7-41d8-9fbc-fd7c43ed52c7 is in state SUCCESS

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on enabled services] ***********************************
Saturday 04 April 2026 00:43:48 +0000 (0:00:00.740)       0:00:00.740 ********
changed: [testbed-manager] => (item=enable_netdata_True)
changed: [testbed-node-0] => (item=enable_netdata_True)
changed: [testbed-node-1] => (item=enable_netdata_True)
changed: [testbed-node-2] => (item=enable_netdata_True)
changed: [testbed-node-3] => (item=enable_netdata_True)
changed: [testbed-node-4] => (item=enable_netdata_True)
changed: [testbed-node-5] => (item=enable_netdata_True)

PLAY [Apply role netdata] ******************************************************

TASK [osism.services.netdata : Include distribution specific install tasks] ****
Saturday 04 April 2026 00:43:50 +0000 (0:00:01.649)       0:00:02.390 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
Saturday 04 April 2026 00:43:52 +0000 (0:00:01.495)       0:00:03.885 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.services.netdata : Install apt-transport-https package] ************
Saturday 04 April 2026 00:43:55 +0000 (0:00:03.162)       0:00:07.048 ********
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-manager]

TASK [osism.services.netdata : Add repository gpg key] *************************
Saturday 04 April 2026 00:43:58 +0000 (0:00:03.045)       0:00:10.093 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [osism.services.netdata : Add repository] *********************************
Saturday 04 April 2026 00:44:00 +0000 (0:00:02.560)       0:00:12.654 ********
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

TASK [osism.services.netdata : Install package netdata] ************************
Saturday 04 April 2026 00:44:11 +0000 (0:00:10.434)       0:00:23.089 ********
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-manager]

TASK [osism.services.netdata : Include config tasks] ***************************
Saturday 04 April 2026 00:44:49 +0000 (0:00:38.432)       0:01:01.522 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Copy configuration files] ***********************
Saturday 04 April 2026 00:44:51 +0000 (0:00:01.519)       0:01:03.041 ********
changed: [testbed-node-0] => (item=netdata.conf)
changed: [testbed-node-2] => (item=netdata.conf)
changed: [testbed-manager] => (item=netdata.conf)
changed: [testbed-node-1] => (item=netdata.conf)
changed: [testbed-node-3] => (item=netdata.conf)
changed: [testbed-node-4] => (item=netdata.conf)
changed: [testbed-node-5] => (item=netdata.conf)
changed: [testbed-manager] => (item=stream.conf)
changed: [testbed-node-0] => (item=stream.conf)
changed: [testbed-node-2] => (item=stream.conf)
changed: [testbed-node-3] => (item=stream.conf)
changed: [testbed-node-1] => (item=stream.conf)
changed: [testbed-node-4] => (item=stream.conf)
changed: [testbed-node-5] => (item=stream.conf)

TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
Saturday 04 April 2026 00:44:56 +0000 (0:00:04.741)       0:01:07.783 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Opt out from anonymous statistics] **************
Saturday 04 April 2026 00:44:57 +0000 (0:00:01.312)       0:01:09.096 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [osism.services.netdata : Add netdata user to docker group] ***************
Saturday 04 April 2026 00:44:58 +0000 (0:00:01.204)       0:01:10.300 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Manage service netdata] *************************
Saturday 04 April 2026 00:44:59 +0000 (0:00:01.331)       0:01:11.631 ********
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.services.netdata : Include host type specific tasks] ***************
Saturday 04 April 2026 00:45:01 +0000 (0:00:01.864)       0:01:13.496 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
Saturday 04 April 2026 00:45:03 +0000 (0:00:01.476)       0:01:14.972 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
Saturday 04 April 2026 00:45:05 +0000 (0:00:02.123)       0:01:17.096 ********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=16   changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-1             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-2             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-3             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-4             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-5             : ok=15   changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:45:16 +0000 (0:00:11.538)       0:01:28.634 ********
===============================================================================
osism.services.netdata : Install package netdata ----------------------- 38.43s
osism.services.netdata : Restart service netdata ----------------------- 11.54s
osism.services.netdata : Add repository -------------------------------- 10.43s
osism.services.netdata : Copy configuration files ----------------------- 4.74s
osism.services.netdata : Remove old architecture-dependent repository --- 3.17s
osism.services.netdata : Install apt-transport-https package ------------ 3.04s
osism.services.netdata : Add repository gpg key ------------------------- 2.56s
osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.12s
osism.services.netdata : Manage service netdata ------------------------- 1.86s
Group hosts based on enabled services ----------------------------------- 1.66s
osism.services.netdata : Include config tasks --------------------------- 1.52s
osism.services.netdata : Include distribution specific install tasks ---- 1.49s
osism.services.netdata : Include host type specific tasks --------------- 1.48s
osism.services.netdata : Add netdata user to docker group --------------- 1.33s
osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.31s
osism.services.netdata : Opt out from anonymous statistics -------------- 1.20s

2026-04-04 00:45:18 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:45:18 | INFO  | Wait 1 second(s) until the
next check 2026-04-04 00:45:22.008204 | orchestrator | 2026-04-04 00:45:22 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:45:22.010971 | orchestrator | 2026-04-04 00:45:22 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in state STARTED 2026-04-04 00:45:22.012469 | orchestrator | 2026-04-04 00:45:22 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:45:22.012653 | orchestrator | 2026-04-04 00:45:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:55.505147 | orchestrator | 2026-04-04 00:45:55 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:45:55.507667 | orchestrator | 2026-04-04 00:45:55 | INFO  | Task e92b4c8b-3e9c-4245-b6a2-3507e187cb8a is in
state SUCCESS 2026-04-04 00:45:55.510511 | orchestrator | 2026-04-04 00:45:55.510579 | orchestrator | 2026-04-04 00:45:55.510599 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-04 00:45:55.510617 | orchestrator | 2026-04-04 00:45:55.510655 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-04 00:45:55.510673 | orchestrator | Saturday 04 April 2026 00:43:42 +0000 (0:00:00.362) 0:00:00.362 ******** 2026-04-04 00:45:55.510723 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:45:55.510741 | orchestrator | 2026-04-04 00:45:55.510760 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-04 00:45:55.510778 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:01.208) 0:00:01.570 ******** 2026-04-04 00:45:55.510796 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510812 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510829 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510848 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.510866 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510883 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510901 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.510919 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510937 | orchestrator | changed: 
[testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.510955 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-04 00:45:55.510973 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.510990 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511009 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.511027 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.511045 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511064 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511083 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-04 00:45:55.511102 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511121 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511140 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511159 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-04 00:45:55.511178 | orchestrator | 2026-04-04 00:45:55.511196 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-04 00:45:55.511216 | orchestrator | Saturday 04 April 2026 00:43:47 +0000 (0:00:03.616) 0:00:05.187 ******** 2026-04-04 00:45:55.511236 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:45:55.511256 | orchestrator | 2026-04-04 00:45:55.511275 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-04 00:45:55.511293 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:01.422) 0:00:06.610 ******** 2026-04-04 00:45:55.511317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511397 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511475 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.511502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511589 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511661 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.511869 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-04 00:45:55.511886 | orchestrator | 2026-04-04 00:45:55.511903 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-04 00:45:55.511920 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:04.613) 0:00:11.223 ******** 2026-04-04 00:45:55.511938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.511956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.511983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.511995 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:45:55.512006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512030 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512041 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512051 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:45:55.512061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512109 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512175 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:45:55.512185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-04 00:45:55.512211 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:45:55.512228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512249 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:45:55.512272 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:45:55.512285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512340 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:45:55.512353 | orchestrator | 2026-04-04 00:45:55.512368 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-04 00:45:55.512382 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:01.935) 0:00:13.158 ******** 2026-04-04 00:45:55.512396 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512411 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512428 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512436 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:45:55.512444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512572 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:45:55.512580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:45:55.512611 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:45:55.512619 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:45:55.512627 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:45:55.512635 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:45:55.512643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-04 00:45:55.512651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:45:55.512660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:45:55.512668 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:45:55.512676 | orchestrator |
2026-04-04 00:45:55.512736 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-04-04 00:45:55.512749 | orchestrator | Saturday 04 April 2026 00:43:57 +0000 (0:00:02.317) 0:00:15.476 ********
2026-04-04 00:45:55.512756 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:45:55.512764 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:45:55.512772 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:45:55.512785 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:45:55.512793 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:45:55.512807 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:45:55.512816 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:45:55.512824 | orchestrator |
2026-04-04 00:45:55.512832 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-04-04 00:45:55.512840 | orchestrator | Saturday 04 April 2026 00:43:58 +0000 (0:00:00.987) 0:00:16.463 ********
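The long per-host `skipping`/`changed` dumps above come from kolla-ansible iterating a dict of per-service container definitions (fluentd, kolla-toolbox, cron) as `(key, value)` items, with each host acting only on services whose group it belongs to. A minimal sketch of that selection logic (the `services` data and function names here are illustrative, not kolla-ansible's actual code):

```python
# Sketch of the loop pattern behind the log output: a services dict is
# expanded into (key, value) items (like Ansible's dict2items filter),
# and a host only acts on services whose group it is a member of --
# everything else is reported as "skipping".
services = {
    "fluentd": {"container_name": "fluentd", "group": "fluentd", "enabled": True},
    "kolla-toolbox": {"container_name": "kolla_toolbox", "group": "kolla-toolbox", "enabled": True},
    "cron": {"container_name": "cron", "group": "cron", "enabled": True},
}

def actions_for_host(host_groups, services):
    """Return (service, action) pairs mirroring the changed/skipping lines."""
    result = []
    for key, value in services.items():  # dict2items equivalent
        if value["enabled"] and value["group"] in host_groups:
            result.append((key, "changed"))
        else:
            result.append((key, "skipping"))
    return result

# A host that is in all three groups deploys all three services:
print(actions_for_host({"fluentd", "kolla-toolbox", "cron"}, services))
```

This is why the same three item dicts repeat once per host in the log: the loop runs on every host, and only the group membership decides between `changed` and `skipping`.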
2026-04-04 00:45:55.512848 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:45:55.512859 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:45:55.512868 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:45:55.512881 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:45:55.512889 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:45:55.512897 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:45:55.512904 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:45:55.512912 | orchestrator | 2026-04-04 00:45:55.512920 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-04 00:45:55.512928 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.924) 0:00:17.388 ******** 2026-04-04 00:45:55.512937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.512945 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.512954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.512963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.512971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.512979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.512992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.513010 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-04 00:45:55.513035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513052 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513082 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513099 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513108 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513133 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:45:55.513141 | orchestrator | 2026-04-04 00:45:55.513158 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-04 00:45:55.513166 | orchestrator | Saturday 04 April 2026 00:44:07 +0000 (0:00:07.237) 0:00:24.625 ******** 2026-04-04 00:45:55.513174 | orchestrator | [WARNING]: Skipped 2026-04-04 00:45:55.513187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-04 00:45:55.513196 | orchestrator | to this access issue: 2026-04-04 00:45:55.513204 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-04 00:45:55.513212 | orchestrator | directory 2026-04-04 00:45:55.513220 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:45:55.513227 | orchestrator | 2026-04-04 00:45:55.513236 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-04 00:45:55.513244 | orchestrator | Saturday 04 April 2026 00:44:08 +0000 (0:00:01.215) 0:00:25.841 ******** 2026-04-04 00:45:55.513251 | orchestrator | [WARNING]: Skipped 2026-04-04 00:45:55.513260 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-04 00:45:55.513273 | orchestrator | to this access issue: 2026-04-04 00:45:55.513282 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-04 00:45:55.513290 | orchestrator | directory 2026-04-04 00:45:55.513298 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:45:55.513305 | 
orchestrator | 2026-04-04 00:45:55.513314 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-04 00:45:55.513328 | orchestrator | Saturday 04 April 2026 00:44:09 +0000 (0:00:01.110) 0:00:26.951 ******** 2026-04-04 00:45:55.513336 | orchestrator | [WARNING]: Skipped 2026-04-04 00:45:55.513345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-04 00:45:55.513353 | orchestrator | to this access issue: 2026-04-04 00:45:55.513361 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-04 00:45:55.513369 | orchestrator | directory 2026-04-04 00:45:55.513377 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:45:55.513385 | orchestrator | 2026-04-04 00:45:55.513393 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-04 00:45:55.513401 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.862) 0:00:27.813 ******** 2026-04-04 00:45:55.513409 | orchestrator | [WARNING]: Skipped 2026-04-04 00:45:55.513417 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-04 00:45:55.513425 | orchestrator | to this access issue: 2026-04-04 00:45:55.513433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-04 00:45:55.513441 | orchestrator | directory 2026-04-04 00:45:55.513449 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 00:45:55.513456 | orchestrator | 2026-04-04 00:45:55.513464 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-04 00:45:55.513473 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:01.142) 0:00:28.955 ******** 2026-04-04 00:45:55.513480 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:45:55.513488 | orchestrator | changed: [testbed-node-1] 2026-04-04 
00:45:55.513496 | orchestrator | changed: [testbed-manager] 2026-04-04 00:45:55.513504 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:45:55.513511 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:45:55.513519 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:45:55.513527 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:45:55.513535 | orchestrator | 2026-04-04 00:45:55.513543 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-04 00:45:55.513551 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:03.797) 0:00:32.753 ******** 2026-04-04 00:45:55.513559 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513575 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513595 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513603 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513611 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-04 00:45:55.513619 | orchestrator | 2026-04-04 00:45:55.513627 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-04 00:45:55.513635 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:02.775) 0:00:35.529 ******** 2026-04-04 00:45:55.513643 | orchestrator | changed: [testbed-node-2] 2026-04-04 
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [common : Ensuring config directories have correct owner and permission] ***
Saturday 04 April 2026 00:44:21 +0000 (0:00:03.998) 0:00:39.527 ********
ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
Saturday 04 April 2026 00:44:25 +0000 (0:00:03.492) 0:00:43.020 ********
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)

TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
Saturday 04 April 2026 00:44:28 +0000 (0:00:02.951) 0:00:45.972 ********
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)

TASK [common : Check common containers] ****************************************
Saturday 04 April 2026 00:44:30 +0000 (0:00:02.032) 0:00:48.004 ********
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})

TASK [common : Creating log volume] ********************************************
Saturday 04 April 2026 00:44:33 +0000 (0:00:03.307) 0:00:51.312 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
Saturday 04 April 2026 00:44:35 +0000 (0:00:01.842) 0:00:53.154 ********
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:01.810) 0:00:54.965 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.064) 0:00:55.029 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.060) 0:00:55.090 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.061) 0:00:55.152 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.063) 0:00:55.216 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.060) 0:00:55.276 ********

TASK [common : Flush handlers] *************************************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.059) 0:00:55.336 ********

RUNNING HANDLER [common : Restart fluentd container] ***************************
Saturday 04 April 2026 00:44:37 +0000 (0:00:00.081) 0:00:55.417 ********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
Saturday 04 April 2026 00:45:09 +0000 (0:00:32.099) 0:01:27.516 ********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]

RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
Saturday 04 April 2026 00:45:48 +0000 (0:00:38.485) 0:02:06.002 ********
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [common : Restart cron container] ******************************
Saturday 04 April 2026 00:45:50 +0000 (0:00:02.167) 0:02:08.169 ********
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY RECAP *********************************************************************
testbed-manager            : ok=22   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-0             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-1             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-2             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-3             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-4             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0
testbed-node-5             : ok=18   changed=14   unreachable=0    failed=0    skipped=4    rescued=0    ignored=0


TASKS RECAP ********************************************************************
Saturday 04 April 2026 00:45:54 +0000 (0:00:04.082) 0:02:12.252 ********
===============================================================================
common : Restart kolla-toolbox container ------------------------------- 38.49s
common : Restart fluentd container ------------------------------------- 32.10s
common : Copying over config.json files for services -------------------- 7.24s
service-cert-copy : common | Copying over extra CA certificates --------- 4.61s
common : Restart cron container ----------------------------------------- 4.08s
common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.00s
common : Copying over fluentd.conf -------------------------------------- 3.80s
common : Ensuring config directories exist ------------------------------ 3.62s
common : Ensuring config directories have correct owner and permission --- 3.49s
| orchestrator | common : Check common containers ---------------------------------------- 3.31s 2026-04-04 00:45:55.515561 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.95s 2026-04-04 00:45:55.515569 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.78s 2026-04-04 00:45:55.515577 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.32s 2026-04-04 00:45:55.515585 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.17s 2026-04-04 00:45:55.515601 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.03s 2026-04-04 00:45:55.515609 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.94s 2026-04-04 00:45:55.515617 | orchestrator | common : Creating log volume -------------------------------------------- 1.84s 2026-04-04 00:45:55.515631 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.81s 2026-04-04 00:45:55.515639 | orchestrator | common : include_tasks -------------------------------------------------- 1.42s 2026-04-04 00:45:55.515647 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.22s 2026-04-04 00:45:55.515660 | orchestrator | 2026-04-04 00:45:55 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:45:55.515668 | orchestrator | 2026-04-04 00:45:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:45:58.531623 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:45:58.532748 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:45:58.533620 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task 60a6bca9-8df7-44de-a08c-df8f3f9067cc is in state STARTED 
2026-04-04 00:45:58.534460 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:45:58.535259 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:45:58.535968 | orchestrator | 2026-04-04 00:45:58 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:45:58.536016 | orchestrator | 2026-04-04 00:45:58 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:01.562271 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:01.562728 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:01.563670 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task 60a6bca9-8df7-44de-a08c-df8f3f9067cc is in state STARTED
2026-04-04 00:46:01.564501 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:01.565173 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:01.565922 | orchestrator | 2026-04-04 00:46:01 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:01.566274 | orchestrator | 2026-04-04 00:46:01 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:04.597080 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:04.598428 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:04.598923 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task 60a6bca9-8df7-44de-a08c-df8f3f9067cc is in state STARTED
2026-04-04 00:46:04.600364 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:04.602079 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:04.602938 | orchestrator | 2026-04-04 00:46:04 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:04.603494 | orchestrator | 2026-04-04 00:46:04 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:07.642115 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:07.644084 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:07.645779 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task 60a6bca9-8df7-44de-a08c-df8f3f9067cc is in state STARTED
2026-04-04 00:46:07.648245 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:07.650389 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:07.652829 | orchestrator | 2026-04-04 00:46:07 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:07.653352 | orchestrator | 2026-04-04 00:46:07 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:10.687046 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:10.687126 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:10.687136 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 60a6bca9-8df7-44de-a08c-df8f3f9067cc is in state SUCCESS
2026-04-04 00:46:10.690182 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:10.690335 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:10.691361 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:46:10.692025 | orchestrator | 2026-04-04 00:46:10 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:10.692052 | orchestrator | 2026-04-04 00:46:10 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:13.722002 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:13.722970 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:13.723820 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:13.725441 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:13.726173 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:46:13.726996 | orchestrator | 2026-04-04 00:46:13 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:13.727093 | orchestrator | 2026-04-04 00:46:13 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:16.754405 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:16.754797 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:16.755471 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state STARTED
2026-04-04 00:46:16.756146 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:16.756898 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
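The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above come from a simple state-polling loop. A minimal sketch of that pattern follows; `get_task_state` and `wait_for_tasks` are hypothetical names for illustration, not the actual OSISM client code.

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until no task is left pending.

    `get_task_state` is a hypothetical callable mapping a task id to its
    current state string (e.g. "STARTED" or "SUCCESS"). Returns the task
    ids in the order they completed.
    """
    pending = set(task_ids)
    completed = []
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
                completed.append(task_id)
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return completed
```

As in the log, each poll cycle reports every still-pending task before sleeping, so a long-running task keeps appearing as STARTED until it flips to SUCCESS and drops out of the set.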
2026-04-04 00:46:16.757470 | orchestrator | 2026-04-04 00:46:16 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:16.757496 | orchestrator | 2026-04-04 00:46:16 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:46:19.792370 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:19.792572 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:19.793464 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 588fedad-1630-4cf7-b09d-5af4fe51d628 is in state SUCCESS
2026-04-04 00:46:19.794335 | orchestrator |
2026-04-04 00:46:19.794366 | orchestrator |
2026-04-04 00:46:19.794373 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:46:19.794379 | orchestrator |
2026-04-04 00:46:19.794408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:46:19.794417 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.424) 0:00:00.424 ********
2026-04-04 00:46:19.794431 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:46:19.794441 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:46:19.794452 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:46:19.794461 | orchestrator |
2026-04-04 00:46:19.794469 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:46:19.794477 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.257) 0:00:00.682 ********
2026-04-04 00:46:19.794486 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-04 00:46:19.794495 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-04 00:46:19.794504 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-04 00:46:19.794512 | orchestrator |
2026-04-04 00:46:19.794520 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-04 00:46:19.794528 | orchestrator |
2026-04-04 00:46:19.794536 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-04 00:46:19.794545 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.625) 0:00:01.308 ********
2026-04-04 00:46:19.794554 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:46:19.794564 | orchestrator |
2026-04-04 00:46:19.794572 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-04 00:46:19.794581 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.457) 0:00:01.765 ********
2026-04-04 00:46:19.794590 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-04 00:46:19.794598 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-04 00:46:19.794606 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-04 00:46:19.794613 | orchestrator |
2026-04-04 00:46:19.794621 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-04 00:46:19.794629 | orchestrator | Saturday 04 April 2026 00:46:01 +0000 (0:00:01.638) 0:00:03.404 ********
2026-04-04 00:46:19.794651 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-04 00:46:19.794660 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-04 00:46:19.794667 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-04 00:46:19.794701 | orchestrator |
2026-04-04 00:46:19.794709 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-04-04 00:46:19.794716 | orchestrator | Saturday 04 April 2026 00:46:02 +0000 (0:00:01.396) 0:00:04.800 ********
2026-04-04 00:46:19.794726 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:46:19.794733 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:46:19.794741 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:46:19.794749 | orchestrator |
2026-04-04 00:46:19.794757 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-04 00:46:19.794765 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:01.747) 0:00:06.548 ********
2026-04-04 00:46:19.794774 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:46:19.794782 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:46:19.794790 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:46:19.794798 | orchestrator |
2026-04-04 00:46:19.794807 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:46:19.794815 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.794827 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.794833 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.794838 | orchestrator |
2026-04-04 00:46:19.794851 | orchestrator |
2026-04-04 00:46:19.794856 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:46:19.794861 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:03.032) 0:00:09.580 ********
2026-04-04 00:46:19.794866 | orchestrator | ===============================================================================
2026-04-04 00:46:19.794871 | orchestrator | memcached : Restart memcached container --------------------------------- 3.03s
2026-04-04 00:46:19.794876 | orchestrator | memcached : Check memcached container ----------------------------------- 1.75s
2026-04-04 00:46:19.794881 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.64s
2026-04-04 00:46:19.794886 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.40s
2026-04-04 00:46:19.794891 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-04-04 00:46:19.794896 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.46s
2026-04-04 00:46:19.794901 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s
2026-04-04 00:46:19.794906 | orchestrator |
2026-04-04 00:46:19.794911 | orchestrator |
2026-04-04 00:46:19.794916 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:46:19.794921 | orchestrator |
2026-04-04 00:46:19.794926 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:46:19.794931 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.349) 0:00:00.349 ********
2026-04-04 00:46:19.794936 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:46:19.794942 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:46:19.794948 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:46:19.794953 | orchestrator |
2026-04-04 00:46:19.794960 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:46:19.794976 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.325) 0:00:00.674 ********
2026-04-04 00:46:19.794982 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-04 00:46:19.794988 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-04 00:46:19.794994 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-04 00:46:19.795000 | orchestrator |
2026-04-04 00:46:19.795006 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-04 00:46:19.795012 | orchestrator |
2026-04-04 00:46:19.795017 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-04 00:46:19.795023 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.434) 0:00:01.108 ********
2026-04-04 00:46:19.795029 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:46:19.795035 | orchestrator |
2026-04-04 00:46:19.795041 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-04 00:46:19.795047 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.804) 0:00:01.913 ********
2026-04-04 00:46:19.795056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795113 | orchestrator |
2026-04-04 00:46:19.795119 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-04 00:46:19.795125 | orchestrator | Saturday 04 April 2026 00:46:02 +0000 (0:00:02.108) 0:00:04.022 ********
2026-04-04 00:46:19.795131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795178 | orchestrator |
2026-04-04 00:46:19.795184 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-04 00:46:19.795190 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:02.588) 0:00:06.610 ********
2026-04-04 00:46:19.795199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795267 | orchestrator |
2026-04-04 00:46:19.795280 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-04-04 00:46:19.795288 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:02.856) 0:00:09.467 ********
2026-04-04 00:46:19.795296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-04 00:46:19.795359 | orchestrator |
2026-04-04 00:46:19.795368 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:46:19.795377 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:01.788) 0:00:11.256 ********
2026-04-04 00:46:19.795385 | orchestrator |
2026-04-04 00:46:19.795393 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:46:19.795406 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:00.220) 0:00:11.477 ********
2026-04-04 00:46:19.795414 | orchestrator |
2026-04-04 00:46:19.795422 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-04 00:46:19.795429 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:00.071) 0:00:11.548 ********
2026-04-04 00:46:19.795437 | orchestrator |
2026-04-04 00:46:19.795445 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-04 00:46:19.795452 | orchestrator | Saturday 04 April 2026 00:46:09 +0000 (0:00:00.069) 0:00:11.618 ********
2026-04-04 00:46:19.795460 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:46:19.795470 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:46:19.795486 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:46:19.795493 | orchestrator |
2026-04-04 00:46:19.795501 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-04 00:46:19.795509 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:03.783) 0:00:15.401 ********
2026-04-04 00:46:19.795517 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:46:19.795525 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:46:19.795533 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:46:19.795542 | orchestrator |
2026-04-04 00:46:19.795551 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:46:19.795559 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.795568 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.795577 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:46:19.795585 | orchestrator |
2026-04-04 00:46:19.795593 | orchestrator |
2026-04-04 00:46:19.795602 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:46:19.795610 | orchestrator | Saturday 04 April
2026 00:46:16 +0000 (0:00:02.839) 0:00:18.241 ********
2026-04-04 00:46:19.795619 | orchestrator | ===============================================================================
2026-04-04 00:46:19.795633 | orchestrator | redis : Restart redis container ----------------------------------------- 3.78s
2026-04-04 00:46:19.795642 | orchestrator | redis : Copying over redis config files --------------------------------- 2.86s
2026-04-04 00:46:19.795648 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 2.84s
2026-04-04 00:46:19.795653 | orchestrator | redis : Copying over default config.json files -------------------------- 2.59s
2026-04-04 00:46:19.795659 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.11s
2026-04-04 00:46:19.795664 | orchestrator | redis : Check redis containers ------------------------------------------ 1.79s
2026-04-04 00:46:19.795688 | orchestrator | redis : include_tasks --------------------------------------------------- 0.80s
2026-04-04 00:46:19.795694 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2026-04-04 00:46:19.795699 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.36s
2026-04-04 00:46:19.795704 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-04-04 00:46:19.797037 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state STARTED
2026-04-04 00:46:19.797883 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:46:19.798650 | orchestrator | 2026-04-04 00:46:19 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED
2026-04-04 00:46:19.798690 | orchestrator | 2026-04-04 00:46:19 | INFO  | Wait 1 second(s) until the next check
[identical status checks for tasks f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34, 9ea14550-5acd-457e-8e9d-21de3f3077ec, 505a8fba-1a9e-4eb1-bff3-c243c7358619, 3d459c63-ad65-4729-bf42-e3d0b5d6225a and 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 repeated every ~3 s from 00:46:22 through 00:46:53; all remained in state STARTED]
2026-04-04 00:46:56.542920 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:46:56.543005 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:46:56.543017 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task 505a8fba-1a9e-4eb1-bff3-c243c7358619 is in state SUCCESS
2026-04-04 00:46:56.544148 | orchestrator |
2026-04-04 00:46:56.544185 | orchestrator |
2026-04-04 00:46:56.544193 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:46:56.544205 | orchestrator |
2026-04-04 00:46:56.544213 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:46:56.544221 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.435) 0:00:00.435 ********
2026-04-04 00:46:56.544228 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:46:56.544236 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:46:56.544243 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:46:56.544249 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:46:56.544255 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:46:56.544262 | orchestrator | ok:
[testbed-node-2] 2026-04-04 00:46:56.544269 | orchestrator | 2026-04-04 00:46:56.544276 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:46:56.544284 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.697) 0:00:01.132 ******** 2026-04-04 00:46:56.544291 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544297 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544304 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544310 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544317 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544323 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-04 00:46:56.544330 | orchestrator | 2026-04-04 00:46:56.544336 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-04 00:46:56.544342 | orchestrator | 2026-04-04 00:46:56.544348 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-04 00:46:56.544355 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:01.144) 0:00:02.277 ******** 2026-04-04 00:46:56.544363 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:46:56.544371 | orchestrator | 2026-04-04 00:46:56.544377 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-04 00:46:56.544384 | orchestrator | Saturday 04 April 2026 00:46:01 +0000 (0:00:01.097) 0:00:03.374 ******** 2026-04-04 00:46:56.544390 | 
orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-04 00:46:56.544397 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-04 00:46:56.544403 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-04 00:46:56.544409 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-04 00:46:56.544415 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-04 00:46:56.544422 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-04 00:46:56.544430 | orchestrator | 2026-04-04 00:46:56.544455 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-04 00:46:56.544463 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:01.553) 0:00:04.928 ******** 2026-04-04 00:46:56.544469 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-04-04 00:46:56.544475 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-04-04 00:46:56.544481 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-04-04 00:46:56.544487 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-04-04 00:46:56.544494 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-04-04 00:46:56.544501 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-04-04 00:46:56.544508 | orchestrator | 2026-04-04 00:46:56.544513 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-04 00:46:56.544520 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:01.556) 0:00:06.485 ******** 2026-04-04 00:46:56.544526 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-04 00:46:56.544533 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:46:56.544540 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-04 00:46:56.544546 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
00:46:56.544552 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-04 00:46:56.544559 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:46:56.544572 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-04 00:46:56.544578 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:46:56.544585 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-04 00:46:56.544591 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:46:56.544597 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-04 00:46:56.544604 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:46:56.544610 | orchestrator | 2026-04-04 00:46:56.544616 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-04 00:46:56.544622 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:01.309) 0:00:07.794 ******** 2026-04-04 00:46:56.544628 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:46:56.544635 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:46:56.544642 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:46:56.544694 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:46:56.544701 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:46:56.544708 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:46:56.544714 | orchestrator | 2026-04-04 00:46:56.544720 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-04 00:46:56.544726 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:00.680) 0:00:08.475 ******** 2026-04-04 00:46:56.544748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544818 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544850 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544922 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544929 | orchestrator | 2026-04-04 00:46:56.544937 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-04 00:46:56.544944 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:01.595) 0:00:10.070 ******** 2026-04-04 00:46:56.544952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.544990 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545027 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545069 | orchestrator | 2026-04-04 00:46:56.545076 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] 
**************************** 2026-04-04 00:46:56.545083 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:03.010) 0:00:13.080 ******** 2026-04-04 00:46:56.545090 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:46:56.545097 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:46:56.545103 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:46:56.545111 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:46:56.545117 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:46:56.545124 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:46:56.545132 | orchestrator | 2026-04-04 00:46:56.545139 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-04 00:46:56.545146 | orchestrator | Saturday 04 April 2026 00:46:12 +0000 (0:00:00.954) 0:00:14.035 ******** 2026-04-04 00:46:56.545154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-04 00:46:56.545260 | orchestrator | 2026-04-04 00:46:56.545267 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545274 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 
(0:00:02.242) 0:00:16.277 ******** 2026-04-04 00:46:56.545281 | orchestrator | 2026-04-04 00:46:56.545288 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545295 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:00.167) 0:00:16.445 ******** 2026-04-04 00:46:56.545301 | orchestrator | 2026-04-04 00:46:56.545308 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545315 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.139) 0:00:16.584 ******** 2026-04-04 00:46:56.545322 | orchestrator | 2026-04-04 00:46:56.545336 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545343 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.137) 0:00:16.722 ******** 2026-04-04 00:46:56.545350 | orchestrator | 2026-04-04 00:46:56.545357 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545365 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.261) 0:00:16.983 ******** 2026-04-04 00:46:56.545371 | orchestrator | 2026-04-04 00:46:56.545379 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-04 00:46:56.545385 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.239) 0:00:17.223 ******** 2026-04-04 00:46:56.545392 | orchestrator | 2026-04-04 00:46:56.545398 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-04 00:46:56.545406 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.111) 0:00:17.334 ******** 2026-04-04 00:46:56.545412 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:46:56.545419 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:46:56.545426 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:46:56.545433 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:46:56.545440 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:46:56.545446 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:46:56.545453 | orchestrator | 2026-04-04 00:46:56.545460 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-04 00:46:56.545467 | orchestrator | Saturday 04 April 2026 00:46:24 +0000 (0:00:09.078) 0:00:26.412 ******** 2026-04-04 00:46:56.545474 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:46:56.545486 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:46:56.545492 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:46:56.545499 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:46:56.545506 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:46:56.545513 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:46:56.545519 | orchestrator | 2026-04-04 00:46:56.545526 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-04 00:46:56.545536 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:01.916) 0:00:28.329 ******** 2026-04-04 00:46:56.545543 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:46:56.545550 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:46:56.545556 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:46:56.545563 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:46:56.545570 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:46:56.545577 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:46:56.545583 | orchestrator | 2026-04-04 00:46:56.545590 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-04 00:46:56.545597 | orchestrator | Saturday 04 April 2026 00:46:31 +0000 (0:00:04.414) 0:00:32.744 ******** 2026-04-04 00:46:56.545604 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-4'}) 2026-04-04 00:46:56.545611 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-04 00:46:56.545617 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-04 00:46:56.545623 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-04 00:46:56.545629 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-04 00:46:56.545640 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-04 00:46:56.545667 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-04 00:46:56.545674 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-04 00:46:56.545680 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-04 00:46:56.545687 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-04 00:46:56.545692 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-04 00:46:56.545699 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-04 00:46:56.545705 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:46:56.545711 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 
2026-04-04 00:46:56.545717 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:46:56.545723 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:46:56.545729 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:46:56.545735 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-04 00:46:56.545739 | orchestrator | 2026-04-04 00:46:56.545743 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-04-04 00:46:56.545747 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:07.821) 0:00:40.566 ******** 2026-04-04 00:46:56.545755 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-04 00:46:56.545759 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:46:56.545763 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-04 00:46:56.545766 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:46:56.545770 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-04 00:46:56.545774 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:46:56.545777 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-04 00:46:56.545781 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-04 00:46:56.545785 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-04 00:46:56.545789 | orchestrator | 2026-04-04 00:46:56.545792 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-04 00:46:56.545796 | orchestrator | Saturday 04 April 2026 00:46:41 +0000 (0:00:02.697) 0:00:43.263 ******** 2026-04-04 00:46:56.545800 | orchestrator | skipping: [testbed-node-3] 
=> (item=['br-ex', 'vxlan0'])  2026-04-04 00:46:56.545804 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:46:56.545807 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-04 00:46:56.545811 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:46:56.545815 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-04 00:46:56.545819 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:46:56.545822 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:46:56.545826 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:46:56.545830 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-04 00:46:56.545834 | orchestrator | 2026-04-04 00:46:56.545837 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-04 00:46:56.545841 | orchestrator | Saturday 04 April 2026 00:46:46 +0000 (0:00:04.902) 0:00:48.166 ******** 2026-04-04 00:46:56.545845 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:46:56.545848 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:46:56.545855 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:46:56.545859 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:46:56.545862 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:46:56.545866 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:46:56.545870 | orchestrator | 2026-04-04 00:46:56.545873 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:46:56.545878 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:46:56.545882 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:46:56.545886 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 
skipped=3  rescued=0 ignored=0 2026-04-04 00:46:56.545889 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 00:46:56.545893 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 00:46:56.545900 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 00:46:56.545904 | orchestrator | 2026-04-04 00:46:56.545907 | orchestrator | 2026-04-04 00:46:56.545911 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:46:56.545915 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:08.278) 0:00:56.444 ******** 2026-04-04 00:46:56.545919 | orchestrator | =============================================================================== 2026-04-04 00:46:56.545928 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 12.69s 2026-04-04 00:46:56.545932 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.08s 2026-04-04 00:46:56.545936 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.82s 2026-04-04 00:46:56.545939 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.90s 2026-04-04 00:46:56.545943 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.01s 2026-04-04 00:46:56.545947 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.70s 2026-04-04 00:46:56.545950 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.24s 2026-04-04 00:46:56.545954 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.92s 2026-04-04 00:46:56.545958 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.60s 
2026-04-04 00:46:56.545961 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.56s 2026-04-04 00:46:56.545965 | orchestrator | module-load : Load modules ---------------------------------------------- 1.55s 2026-04-04 00:46:56.545969 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.31s 2026-04-04 00:46:56.545972 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s 2026-04-04 00:46:56.545976 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.10s 2026-04-04 00:46:56.545980 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.06s 2026-04-04 00:46:56.545983 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.95s 2026-04-04 00:46:56.545987 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s 2026-04-04 00:46:56.545991 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.68s 2026-04-04 00:46:56.545995 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:46:56.545998 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:46:56.546105 | orchestrator | 2026-04-04 00:46:56 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:46:56.546112 | orchestrator | 2026-04-04 00:46:56 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:46:59.636969 | orchestrator | 2026-04-04 00:46:59 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:46:59.647018 | orchestrator | 2026-04-04 00:46:59 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:46:59.651477 | orchestrator | 2026-04-04 00:46:59 | INFO  | Task 
3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:30.055377 | orchestrator | 2026-04-04 00:47:30 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:30.056414 | orchestrator | 2026-04-04 00:47:30 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:30.056456 | orchestrator | 2026-04-04 00:47:30 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:33.099346 | orchestrator | 2026-04-04 00:47:33 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:33.099913 | orchestrator | 2026-04-04 00:47:33 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:33.102059 | orchestrator | 2026-04-04 00:47:33 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:33.102886 | orchestrator | 2026-04-04 00:47:33 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:33.105154 | orchestrator | 2026-04-04 00:47:33 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:33.105195 | orchestrator | 2026-04-04 00:47:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:36.139945 | orchestrator | 2026-04-04 00:47:36 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:36.141496 | orchestrator | 2026-04-04 00:47:36 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:36.142876 | orchestrator | 2026-04-04 00:47:36 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:36.144245 | orchestrator | 2026-04-04 00:47:36 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:36.145344 | orchestrator | 2026-04-04 00:47:36 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:36.145383 | orchestrator | 2026-04-04 00:47:36 | INFO  | Wait 1 
second(s) until the next check 2026-04-04 00:47:39.177441 | orchestrator | 2026-04-04 00:47:39 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:39.177972 | orchestrator | 2026-04-04 00:47:39 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:39.178550 | orchestrator | 2026-04-04 00:47:39 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:39.180586 | orchestrator | 2026-04-04 00:47:39 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:39.181403 | orchestrator | 2026-04-04 00:47:39 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:39.181440 | orchestrator | 2026-04-04 00:47:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:42.221881 | orchestrator | 2026-04-04 00:47:42 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:42.223327 | orchestrator | 2026-04-04 00:47:42 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:42.225305 | orchestrator | 2026-04-04 00:47:42 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:42.228356 | orchestrator | 2026-04-04 00:47:42 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:42.230122 | orchestrator | 2026-04-04 00:47:42 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:42.230410 | orchestrator | 2026-04-04 00:47:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:45.263541 | orchestrator | 2026-04-04 00:47:45 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:45.275422 | orchestrator | 2026-04-04 00:47:45 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:45.275486 | orchestrator | 2026-04-04 00:47:45 | INFO  | Task 
3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:45.278384 | orchestrator | 2026-04-04 00:47:45 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:45.279714 | orchestrator | 2026-04-04 00:47:45 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:45.279773 | orchestrator | 2026-04-04 00:47:45 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:48.317040 | orchestrator | 2026-04-04 00:47:48 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:48.317349 | orchestrator | 2026-04-04 00:47:48 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:48.319505 | orchestrator | 2026-04-04 00:47:48 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:48.321353 | orchestrator | 2026-04-04 00:47:48 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:48.322452 | orchestrator | 2026-04-04 00:47:48 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:48.322500 | orchestrator | 2026-04-04 00:47:48 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:51.359068 | orchestrator | 2026-04-04 00:47:51 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:51.359407 | orchestrator | 2026-04-04 00:47:51 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:51.360099 | orchestrator | 2026-04-04 00:47:51 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:51.360942 | orchestrator | 2026-04-04 00:47:51 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:51.361678 | orchestrator | 2026-04-04 00:47:51 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:51.361756 | orchestrator | 2026-04-04 00:47:51 | INFO  | Wait 1 
second(s) until the next check 2026-04-04 00:47:54.407271 | orchestrator | 2026-04-04 00:47:54 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:54.408826 | orchestrator | 2026-04-04 00:47:54 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:54.411139 | orchestrator | 2026-04-04 00:47:54 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:54.413411 | orchestrator | 2026-04-04 00:47:54 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:54.415001 | orchestrator | 2026-04-04 00:47:54 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:54.415193 | orchestrator | 2026-04-04 00:47:54 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:47:57.447223 | orchestrator | 2026-04-04 00:47:57 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:47:57.448530 | orchestrator | 2026-04-04 00:47:57 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:47:57.449335 | orchestrator | 2026-04-04 00:47:57 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:47:57.450221 | orchestrator | 2026-04-04 00:47:57 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:47:57.450824 | orchestrator | 2026-04-04 00:47:57 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:47:57.451660 | orchestrator | 2026-04-04 00:47:57 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:00.517262 | orchestrator | 2026-04-04 00:48:00 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:00.518150 | orchestrator | 2026-04-04 00:48:00 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:48:00.519341 | orchestrator | 2026-04-04 00:48:00 | INFO  | Task 
3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:48:00.521346 | orchestrator | 2026-04-04 00:48:00 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:00.523679 | orchestrator | 2026-04-04 00:48:00 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:48:00.523746 | orchestrator | 2026-04-04 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:03.637586 | orchestrator | 2026-04-04 00:48:03 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:03.638320 | orchestrator | 2026-04-04 00:48:03 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:48:03.639376 | orchestrator | 2026-04-04 00:48:03 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:48:03.640407 | orchestrator | 2026-04-04 00:48:03 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:03.641618 | orchestrator | 2026-04-04 00:48:03 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:48:03.641637 | orchestrator | 2026-04-04 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:06.675193 | orchestrator | 2026-04-04 00:48:06 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:06.676819 | orchestrator | 2026-04-04 00:48:06 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:48:06.678287 | orchestrator | 2026-04-04 00:48:06 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:48:06.678984 | orchestrator | 2026-04-04 00:48:06 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:06.679525 | orchestrator | 2026-04-04 00:48:06 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:48:06.679548 | orchestrator | 2026-04-04 00:48:06 | INFO  | Wait 1 
second(s) until the next check 2026-04-04 00:48:09.715892 | orchestrator | 2026-04-04 00:48:09 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:09.716189 | orchestrator | 2026-04-04 00:48:09 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:48:09.716896 | orchestrator | 2026-04-04 00:48:09 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:48:09.717546 | orchestrator | 2026-04-04 00:48:09 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:09.718267 | orchestrator | 2026-04-04 00:48:09 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:48:09.718305 | orchestrator | 2026-04-04 00:48:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:12.751298 | orchestrator | 2026-04-04 00:48:12 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:12.752847 | orchestrator | 2026-04-04 00:48:12 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:48:12.755357 | orchestrator | 2026-04-04 00:48:12 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED 2026-04-04 00:48:12.757160 | orchestrator | 2026-04-04 00:48:12 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:12.758746 | orchestrator | 2026-04-04 00:48:12 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state STARTED 2026-04-04 00:48:12.758974 | orchestrator | 2026-04-04 00:48:12 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:15.785414 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:15.785779 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task e8d28cd4-af22-4fd8-8ec4-a1df31f3d769 is in state STARTED 2026-04-04 00:48:15.786335 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task 
9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:48:15.787096 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:48:15.787573 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED
2026-04-04 00:48:15.790259 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task 34e7e9dc-9f06-4b5c-af39-dd6db7ed7c4b is in state STARTED
2026-04-04 00:48:15.791962 | orchestrator | 2026-04-04 00:48:15 | INFO  | Task 1e5ad01f-6ad8-42bd-a1c4-2cf3a2cc2f01 is in state SUCCESS
2026-04-04 00:48:15.792621 | orchestrator |
2026-04-04 00:48:15.793903 | orchestrator |
2026-04-04 00:48:15.793931 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-04 00:48:15.793938 | orchestrator |
2026-04-04 00:48:15.793960 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-04 00:48:15.793966 | orchestrator | Saturday 04 April 2026 00:43:43 +0000 (0:00:00.277) 0:00:00.277 ********
2026-04-04 00:48:15.793972 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:48:15.793979 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:48:15.793986 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:48:15.793993 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.794000 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.794006 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.794047 | orchestrator |
2026-04-04 00:48:15.794058 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-04 00:48:15.794065 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.603) 0:00:00.881 ********
2026-04-04 00:48:15.794072 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794080 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794087 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794093 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794097 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794100 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794104 | orchestrator |
2026-04-04 00:48:15.794108 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-04 00:48:15.794112 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.656) 0:00:01.538 ********
2026-04-04 00:48:15.794116 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794120 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794124 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794127 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794131 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794135 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794138 | orchestrator |
2026-04-04 00:48:15.794142 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-04 00:48:15.794146 | orchestrator | Saturday 04 April 2026 00:43:45 +0000 (0:00:00.578) 0:00:02.117 ********
2026-04-04 00:48:15.794150 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:15.794153 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:15.794157 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:15.794161 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:15.794165 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:15.794168 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:15.794172 | orchestrator |
2026-04-04 00:48:15.794176 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-04 00:48:15.794180 | orchestrator | Saturday 04 April 2026 00:43:48 +0000 (0:00:03.421) 0:00:05.538 ********
2026-04-04 00:48:15.794184 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:15.794188 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:15.794192 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:15.794195 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:15.794199 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:15.794203 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:15.794216 | orchestrator |
2026-04-04 00:48:15.794220 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-04 00:48:15.794224 | orchestrator | Saturday 04 April 2026 00:43:49 +0000 (0:00:01.025) 0:00:06.564 ********
2026-04-04 00:48:15.794227 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:15.794231 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:15.794235 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:15.794239 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:15.794242 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:15.794246 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:15.794250 | orchestrator |
2026-04-04 00:48:15.794254 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-04 00:48:15.794257 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:01.259) 0:00:07.823 ********
2026-04-04 00:48:15.794261 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794265 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794269 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794272 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794276 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794280 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794283 | orchestrator |
2026-04-04 00:48:15.794287 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-04 00:48:15.794291 | orchestrator | Saturday 04 April 2026 00:43:52 +0000 (0:00:01.067) 0:00:08.890 ********
2026-04-04 00:48:15.794295 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794298 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794306 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794310 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794314 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794317 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794321 | orchestrator |
2026-04-04 00:48:15.794325 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-04 00:48:15.794329 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:01.034) 0:00:09.925 ********
2026-04-04 00:48:15.794333 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794336 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794340 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794344 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794348 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794352 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794356 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794359 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794363 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794367 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794377 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794381 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794385 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794389 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794392 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794396 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-04 00:48:15.794400 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-04 00:48:15.794404 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794407 | orchestrator |
2026-04-04 00:48:15.794411 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-04 00:48:15.794418 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:01.241) 0:00:11.166 ********
2026-04-04 00:48:15.794422 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794426 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794429 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794433 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794437 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794440 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794444 | orchestrator |
2026-04-04 00:48:15.794448 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-04 00:48:15.794452 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:01.507) 0:00:12.674 ********
2026-04-04 00:48:15.794456 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:48:15.794460 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:48:15.794464 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:48:15.794467 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.794471 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.794475 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.794478 | orchestrator |
2026-04-04 00:48:15.794482 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-04 00:48:15.794486 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:00.928) 0:00:13.603 ********
2026-04-04 00:48:15.794489 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:48:15.794493 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:48:15.794497 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:48:15.794501 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:48:15.794504 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:48:15.794508 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:48:15.794512 | orchestrator |
2026-04-04 00:48:15.794515 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-04 00:48:15.794519 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:04.864) 0:00:18.467 ********
2026-04-04 00:48:15.794523 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794527 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794530 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794534 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794538 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794542 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794545 | orchestrator |
2026-04-04 00:48:15.794549 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-04 00:48:15.794553 | orchestrator | Saturday 04 April 2026 00:44:03 +0000 (0:00:01.808) 0:00:20.276 ********
2026-04-04 00:48:15.794556 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794560 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794564 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794567 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794571 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794575 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794586 | orchestrator |
2026-04-04 00:48:15.794590 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-04 00:48:15.794631 | orchestrator | Saturday 04 April 2026 00:44:05 +0000 (0:00:01.978) 0:00:22.254 ********
2026-04-04 00:48:15.794636 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794639 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794643 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794647 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794651 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794654 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794658 | orchestrator |
2026-04-04 00:48:15.794662 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-04 00:48:15.794668 | orchestrator | Saturday 04 April 2026 00:44:06 +0000 (0:00:00.889) 0:00:23.144 ********
2026-04-04 00:48:15.794675 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-04 00:48:15.794679 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-04 00:48:15.794682 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794686 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-04 00:48:15.794690 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-04 00:48:15.794693 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794697 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-04 00:48:15.794701 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-04 00:48:15.794704 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794708 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-04 00:48:15.794712 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-04 00:48:15.794715 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794719 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-04 00:48:15.794723 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-04 00:48:15.794727 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794730 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-04 00:48:15.794734 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-04 00:48:15.794738 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794741 | orchestrator |
2026-04-04 00:48:15.794745 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-04 00:48:15.794752 | orchestrator | Saturday 04 April 2026 00:44:07 +0000 (0:00:00.929) 0:00:24.073 ********
2026-04-04 00:48:15.794756 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794760 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794764 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794767 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794771 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794775 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794778 | orchestrator |
2026-04-04 00:48:15.794782 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-04 00:48:15.794786 | orchestrator | Saturday 04 April 2026 00:44:08 +0000 (0:00:01.234) 0:00:25.308 ********
2026-04-04 00:48:15.794790 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.794794 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.794798 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.794801 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.794805 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.794809 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.794812 | orchestrator |
2026-04-04 00:48:15.794816 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-04 00:48:15.794820 | orchestrator |
2026-04-04 00:48:15.794824 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-04 00:48:15.794827 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:01.695) 0:00:27.004 ********
2026-04-04 00:48:15.794831 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.794835 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.794839 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.794842 | orchestrator |
2026-04-04 00:48:15.794846 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-04 00:48:15.794850 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:01.677) 0:00:28.682 ********
2026-04-04 00:48:15.794853 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.794857 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.794861 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.794865 | orchestrator |
2026-04-04 00:48:15.794868 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-04 00:48:15.794872 | orchestrator | Saturday 04 April 2026 00:44:13 +0000 (0:00:01.305) 0:00:29.987 ********
2026-04-04 00:48:15.794876 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.794882 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.794886 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.794890 | orchestrator |
2026-04-04 00:48:15.794893 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-04 00:48:15.794897 | orchestrator | Saturday 04 April 2026 00:44:14 +0000 (0:00:00.999) 0:00:30.987 ******** 2026-04-04 00:48:15.794901 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.794905 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.794908 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.794912 | orchestrator | 2026-04-04 00:48:15.794916 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-04-04 00:48:15.794920 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:01.227) 0:00:32.216 ******** 2026-04-04 00:48:15.794924 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.794928 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.794931 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.794935 | orchestrator | 2026-04-04 00:48:15.794939 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-04-04 00:48:15.794943 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:00.335) 0:00:32.552 ******** 2026-04-04 00:48:15.794946 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.794950 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.794954 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.794957 | orchestrator | 2026-04-04 00:48:15.794961 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-04-04 00:48:15.794965 | orchestrator | Saturday 04 April 2026 00:44:16 +0000 (0:00:01.139) 0:00:33.691 ******** 2026-04-04 00:48:15.794969 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.794973 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.794976 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.794980 | orchestrator | 2026-04-04 00:48:15.794984 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-04-04 00:48:15.794988 
| orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:01.668) 0:00:35.359 ******** 2026-04-04 00:48:15.794991 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:48:15.794995 | orchestrator | 2026-04-04 00:48:15.794999 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-04-04 00:48:15.795004 | orchestrator | Saturday 04 April 2026 00:44:19 +0000 (0:00:00.870) 0:00:36.230 ******** 2026-04-04 00:48:15.795008 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795012 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795016 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795019 | orchestrator | 2026-04-04 00:48:15.795023 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-04-04 00:48:15.795027 | orchestrator | Saturday 04 April 2026 00:44:21 +0000 (0:00:02.440) 0:00:38.671 ******** 2026-04-04 00:48:15.795030 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795034 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795038 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795042 | orchestrator | 2026-04-04 00:48:15.795045 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-04-04 00:48:15.795049 | orchestrator | Saturday 04 April 2026 00:44:23 +0000 (0:00:01.291) 0:00:39.962 ******** 2026-04-04 00:48:15.795053 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795057 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795060 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795064 | orchestrator | 2026-04-04 00:48:15.795068 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-04-04 00:48:15.795072 | orchestrator | Saturday 04 April 2026 00:44:24 +0000 (0:00:01.383) 0:00:41.346 ******** 
2026-04-04 00:48:15.795075 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795079 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795083 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795089 | orchestrator | 2026-04-04 00:48:15.795093 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-04-04 00:48:15.795100 | orchestrator | Saturday 04 April 2026 00:44:25 +0000 (0:00:01.368) 0:00:42.714 ******** 2026-04-04 00:48:15.795104 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795108 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.795111 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795115 | orchestrator | 2026-04-04 00:48:15.795119 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-04 00:48:15.795123 | orchestrator | Saturday 04 April 2026 00:44:26 +0000 (0:00:00.343) 0:00:43.058 ******** 2026-04-04 00:48:15.795126 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.795130 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795134 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795138 | orchestrator | 2026-04-04 00:48:15.795142 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-04 00:48:15.795145 | orchestrator | Saturday 04 April 2026 00:44:26 +0000 (0:00:00.372) 0:00:43.431 ******** 2026-04-04 00:48:15.795149 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795153 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795157 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795160 | orchestrator | 2026-04-04 00:48:15.795164 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-04 00:48:15.795168 | orchestrator | Saturday 04 April 2026 00:44:29 +0000 (0:00:02.617) 0:00:46.049 ******** 
2026-04-04 00:48:15.795172 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795176 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795179 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795183 | orchestrator | 2026-04-04 00:48:15.795187 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-04 00:48:15.795191 | orchestrator | Saturday 04 April 2026 00:44:32 +0000 (0:00:02.953) 0:00:49.003 ******** 2026-04-04 00:48:15.795195 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795198 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795202 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795206 | orchestrator | 2026-04-04 00:48:15.795210 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-04 00:48:15.795214 | orchestrator | Saturday 04 April 2026 00:44:32 +0000 (0:00:00.497) 0:00:49.501 ******** 2026-04-04 00:48:15.795218 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:48:15.795222 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:48:15.795225 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-04 00:48:15.795229 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-04 00:48:15.795233 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-04-04 00:48:15.795237 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-04 00:48:15.795241 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:48:15.795244 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:48:15.795248 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-04 00:48:15.795252 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-04 00:48:15.795258 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-04 00:48:15.795264 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-04-04 00:48:15.795268 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-04 00:48:15.795272 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-04-04 00:48:15.795275 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-04-04 00:48:15.795279 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795283 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795287 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795291 | orchestrator | 2026-04-04 00:48:15.795294 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-04 00:48:15.795298 | orchestrator | Saturday 04 April 2026 00:45:26 +0000 (0:00:54.050) 0:01:43.551 ******** 2026-04-04 00:48:15.795302 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.795306 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795310 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795313 | orchestrator | 2026-04-04 00:48:15.795317 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-04 00:48:15.795323 | orchestrator | Saturday 04 April 2026 00:45:27 +0000 (0:00:00.517) 0:01:44.069 ******** 2026-04-04 00:48:15.795327 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795331 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795334 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795338 | orchestrator | 2026-04-04 00:48:15.795342 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-04 00:48:15.795346 | orchestrator | Saturday 04 April 2026 00:45:28 +0000 (0:00:00.913) 0:01:44.983 ******** 2026-04-04 00:48:15.795350 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795353 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795357 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795361 | orchestrator | 2026-04-04 00:48:15.795365 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-04 00:48:15.795369 | orchestrator | Saturday 04 April 2026 00:45:29 +0000 (0:00:01.186) 0:01:46.170 ******** 2026-04-04 00:48:15.795372 
| orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795376 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795380 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795384 | orchestrator | 2026-04-04 00:48:15.795387 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-04 00:48:15.795391 | orchestrator | Saturday 04 April 2026 00:45:53 +0000 (0:00:24.156) 0:02:10.327 ******** 2026-04-04 00:48:15.795395 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795398 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795402 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795406 | orchestrator | 2026-04-04 00:48:15.795409 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-04 00:48:15.795413 | orchestrator | Saturday 04 April 2026 00:45:54 +0000 (0:00:00.651) 0:02:10.978 ******** 2026-04-04 00:48:15.795417 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795421 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795424 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795428 | orchestrator | 2026-04-04 00:48:15.795432 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-04 00:48:15.795436 | orchestrator | Saturday 04 April 2026 00:45:55 +0000 (0:00:00.850) 0:02:11.829 ******** 2026-04-04 00:48:15.795442 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795446 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795449 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795453 | orchestrator | 2026-04-04 00:48:15.795457 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-04 00:48:15.795461 | orchestrator | Saturday 04 April 2026 00:45:55 +0000 (0:00:00.579) 0:02:12.409 ******** 2026-04-04 00:48:15.795464 | orchestrator | ok: [testbed-node-0] 
2026-04-04 00:48:15.795468 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795472 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795475 | orchestrator | 2026-04-04 00:48:15.795479 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-04 00:48:15.795483 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:00.617) 0:02:13.027 ******** 2026-04-04 00:48:15.795487 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795490 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795494 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795498 | orchestrator | 2026-04-04 00:48:15.795502 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-04 00:48:15.795505 | orchestrator | Saturday 04 April 2026 00:45:56 +0000 (0:00:00.306) 0:02:13.333 ******** 2026-04-04 00:48:15.795509 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795513 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795516 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795520 | orchestrator | 2026-04-04 00:48:15.795524 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-04 00:48:15.795528 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.738) 0:02:14.071 ******** 2026-04-04 00:48:15.795531 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795535 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795539 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795543 | orchestrator | 2026-04-04 00:48:15.795547 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-04 00:48:15.795550 | orchestrator | Saturday 04 April 2026 00:45:57 +0000 (0:00:00.666) 0:02:14.738 ******** 2026-04-04 00:48:15.795554 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795558 | 
orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795562 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795565 | orchestrator | 2026-04-04 00:48:15.795569 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-04 00:48:15.795573 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.798) 0:02:15.536 ******** 2026-04-04 00:48:15.795577 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:15.795580 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:15.795584 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:15.795588 | orchestrator | 2026-04-04 00:48:15.795592 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-04 00:48:15.795605 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.769) 0:02:16.305 ******** 2026-04-04 00:48:15.795609 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.795612 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795616 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795620 | orchestrator | 2026-04-04 00:48:15.795624 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-04 00:48:15.795627 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.367) 0:02:16.673 ******** 2026-04-04 00:48:15.795631 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:15.795635 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:15.795639 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:15.795642 | orchestrator | 2026-04-04 00:48:15.795646 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-04 00:48:15.795650 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.245) 0:02:16.919 ******** 2026-04-04 00:48:15.795654 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795657 | orchestrator | 
ok: [testbed-node-1] 2026-04-04 00:48:15.795663 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795667 | orchestrator | 2026-04-04 00:48:15.795671 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-04 00:48:15.795675 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.822) 0:02:17.741 ******** 2026-04-04 00:48:15.795679 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:15.795685 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:15.795940 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:15.795947 | orchestrator | 2026-04-04 00:48:15.795951 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-04 00:48:15.795956 | orchestrator | Saturday 04 April 2026 00:46:01 +0000 (0:00:00.669) 0:02:18.411 ******** 2026-04-04 00:48:15.795960 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:48:15.795964 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:48:15.795968 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-04 00:48:15.795971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:48:15.795975 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:48:15.795979 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-04 00:48:15.795983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 00:48:15.795987 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 
00:48:15.795990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-04 00:48:15.795994 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-04 00:48:15.795998 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:48:15.796002 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:48:15.796005 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-04 00:48:15.796009 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:48:15.796013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:48:15.796017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-04 00:48:15.796020 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:48:15.796024 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:48:15.796028 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-04 00:48:15.796032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-04 00:48:15.796035 | orchestrator | 2026-04-04 00:48:15.796039 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-04 00:48:15.796043 | orchestrator | 2026-04-04 00:48:15.796047 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-04 00:48:15.796051 | orchestrator | Saturday 04 April 2026 00:46:05 +0000 (0:00:03.582) 
0:02:21.994 ******** 2026-04-04 00:48:15.796054 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:48:15.796058 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:48:15.796062 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:48:15.796066 | orchestrator | 2026-04-04 00:48:15.796070 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-04 00:48:15.796079 | orchestrator | Saturday 04 April 2026 00:46:05 +0000 (0:00:00.301) 0:02:22.296 ******** 2026-04-04 00:48:15.796083 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:48:15.796087 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:48:15.796091 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:48:15.796095 | orchestrator | 2026-04-04 00:48:15.796098 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-04 00:48:15.796102 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:00.702) 0:02:22.998 ******** 2026-04-04 00:48:15.796106 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:48:15.796109 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:48:15.796113 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:48:15.796117 | orchestrator | 2026-04-04 00:48:15.796121 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-04 00:48:15.796124 | orchestrator | Saturday 04 April 2026 00:46:06 +0000 (0:00:00.416) 0:02:23.415 ******** 2026-04-04 00:48:15.796128 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:48:15.796132 | orchestrator | 2026-04-04 00:48:15.796136 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-04 00:48:15.796140 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:00.481) 0:02:23.896 ******** 2026-04-04 00:48:15.796143 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:48:15.796147 
| orchestrator | skipping: [testbed-node-4] 2026-04-04 00:48:15.796151 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:48:15.796155 | orchestrator | 2026-04-04 00:48:15.796159 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-04 00:48:15.796162 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:00.357) 0:02:24.253 ******** 2026-04-04 00:48:15.796166 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:48:15.796170 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:48:15.796174 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:48:15.796178 | orchestrator | 2026-04-04 00:48:15.796182 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-04 00:48:15.796188 | orchestrator | Saturday 04 April 2026 00:46:07 +0000 (0:00:00.416) 0:02:24.670 ******** 2026-04-04 00:48:15.796193 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:48:15.796196 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:48:15.796200 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:48:15.796204 | orchestrator | 2026-04-04 00:48:15.796208 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-04 00:48:15.796212 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:00.311) 0:02:24.982 ******** 2026-04-04 00:48:15.796215 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:15.796219 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:15.796223 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:15.796227 | orchestrator | 2026-04-04 00:48:15.796230 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-04 00:48:15.796234 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:00.651) 0:02:25.633 ******** 2026-04-04 00:48:15.796238 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:15.796242 | 
orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:15.796245 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:15.796249 | orchestrator | 2026-04-04 00:48:15.796253 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-04 00:48:15.796257 | orchestrator | Saturday 04 April 2026 00:46:10 +0000 (0:00:01.206) 0:02:26.840 ******** 2026-04-04 00:48:15.796260 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:15.796264 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:15.796268 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:15.796272 | orchestrator | 2026-04-04 00:48:15.796275 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-04 00:48:15.796279 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:01.783) 0:02:28.623 ******** 2026-04-04 00:48:15.796286 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:48:15.796290 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:48:15.796294 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:48:15.796297 | orchestrator | 2026-04-04 00:48:15.796301 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-04 00:48:15.796305 | orchestrator | 2026-04-04 00:48:15.796309 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-04 00:48:15.796313 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:09.632) 0:02:38.256 ******** 2026-04-04 00:48:15.796316 | orchestrator | ok: [testbed-manager] 2026-04-04 00:48:15.796320 | orchestrator | 2026-04-04 00:48:15.796324 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-04 00:48:15.796328 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.675) 0:02:38.931 ******** 2026-04-04 00:48:15.796331 | orchestrator | changed: [testbed-manager] 2026-04-04 
00:48:15.796335 | orchestrator | 2026-04-04 00:48:15.796339 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-04 00:48:15.796343 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.373) 0:02:39.305 ******** 2026-04-04 00:48:15.796347 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-04 00:48:15.796350 | orchestrator | 2026-04-04 00:48:15.796354 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-04 00:48:15.796358 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:00.536) 0:02:39.841 ******** 2026-04-04 00:48:15.796362 | orchestrator | changed: [testbed-manager] 2026-04-04 00:48:15.796365 | orchestrator | 2026-04-04 00:48:15.796369 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-04 00:48:15.796373 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:00.869) 0:02:40.711 ******** 2026-04-04 00:48:15.796377 | orchestrator | changed: [testbed-manager] 2026-04-04 00:48:15.796380 | orchestrator | 2026-04-04 00:48:15.796384 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-04 00:48:15.796388 | orchestrator | Saturday 04 April 2026 00:46:24 +0000 (0:00:00.509) 0:02:41.220 ******** 2026-04-04 00:48:15.796392 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:48:15.796396 | orchestrator | 2026-04-04 00:48:15.796399 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-04 00:48:15.796403 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:01.682) 0:02:42.903 ******** 2026-04-04 00:48:15.796409 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:48:15.796413 | orchestrator | 2026-04-04 00:48:15.796417 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-04-04 00:48:15.796420 | orchestrator | Saturday 04 April 2026 00:46:27 +0000 (0:00:00.881) 0:02:43.785 ********
2026-04-04 00:48:15.796424 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.796428 | orchestrator |
2026-04-04 00:48:15.796432 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-04 00:48:15.796436 | orchestrator | Saturday 04 April 2026 00:46:27 +0000 (0:00:00.441) 0:02:44.226 ********
2026-04-04 00:48:15.796439 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.796443 | orchestrator |
2026-04-04 00:48:15.796447 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-04 00:48:15.796451 | orchestrator |
2026-04-04 00:48:15.796455 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-04 00:48:15.796459 | orchestrator | Saturday 04 April 2026 00:46:28 +0000 (0:00:00.192) 0:02:44.860 ********
2026-04-04 00:48:15.796462 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.796466 | orchestrator |
2026-04-04 00:48:15.796470 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-04 00:48:15.796474 | orchestrator | Saturday 04 April 2026 00:46:28 +0000 (0:00:00.240) 0:02:45.052 ********
2026-04-04 00:48:15.796478 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-04 00:48:15.796482 | orchestrator |
2026-04-04 00:48:15.796489 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-04 00:48:15.796492 | orchestrator | Saturday 04 April 2026 00:46:28 +0000 (0:00:00.240) 0:02:45.293 ********
2026-04-04 00:48:15.796496 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.796500 | orchestrator |
2026-04-04 00:48:15.796504 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-04 00:48:15.796508 | orchestrator | Saturday 04 April 2026 00:46:29 +0000 (0:00:01.006) 0:02:46.299 ********
2026-04-04 00:48:15.796514 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.796518 | orchestrator |
2026-04-04 00:48:15.796522 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-04 00:48:15.796525 | orchestrator | Saturday 04 April 2026 00:46:30 +0000 (0:00:01.309) 0:02:47.609 ********
2026-04-04 00:48:15.796529 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.796533 | orchestrator |
2026-04-04 00:48:15.796537 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-04 00:48:15.796540 | orchestrator | Saturday 04 April 2026 00:46:31 +0000 (0:00:00.998) 0:02:48.607 ********
2026-04-04 00:48:15.796544 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.796548 | orchestrator |
2026-04-04 00:48:15.796552 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-04 00:48:15.796555 | orchestrator | Saturday 04 April 2026 00:46:32 +0000 (0:00:00.531) 0:02:49.138 ********
2026-04-04 00:48:15.796559 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.796563 | orchestrator |
2026-04-04 00:48:15.796567 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-04 00:48:15.796570 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:06.863) 0:02:56.001 ********
2026-04-04 00:48:15.796574 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.796578 | orchestrator |
2026-04-04 00:48:15.796582 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-04 00:48:15.796586 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:12.224) 0:03:08.226 ********
2026-04-04 00:48:15.796589 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.796593 | orchestrator |
2026-04-04 00:48:15.796620 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-04 00:48:15.796624 | orchestrator |
2026-04-04 00:48:15.796628 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-04 00:48:15.796632 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.544) 0:03:08.770 ********
2026-04-04 00:48:15.796636 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.796639 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.796643 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.796647 | orchestrator |
2026-04-04 00:48:15.796651 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-04 00:48:15.796655 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.568) 0:03:09.339 ********
2026-04-04 00:48:15.796658 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796662 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.796666 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.796670 | orchestrator |
2026-04-04 00:48:15.796674 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-04 00:48:15.796678 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.371) 0:03:09.711 ********
2026-04-04 00:48:15.796681 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-2, testbed-node-1
2026-04-04 00:48:15.796685 | orchestrator |
2026-04-04 00:48:15.796689 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-04 00:48:15.796693 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:00.531) 0:03:10.243 ********
2026-04-04 00:48:15.796697 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796701 | orchestrator |
2026-04-04 00:48:15.796705 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-04 00:48:15.796708 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:00.805) 0:03:11.048 ********
2026-04-04 00:48:15.796715 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796719 | orchestrator |
2026-04-04 00:48:15.796723 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-04 00:48:15.796726 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.829) 0:03:11.878 ********
2026-04-04 00:48:15.796730 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796734 | orchestrator |
2026-04-04 00:48:15.796738 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-04 00:48:15.796742 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.213) 0:03:12.091 ********
2026-04-04 00:48:15.796748 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796752 | orchestrator |
2026-04-04 00:48:15.796756 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-04 00:48:15.796759 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:01.080) 0:03:13.171 ********
2026-04-04 00:48:15.796763 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796767 | orchestrator |
2026-04-04 00:48:15.796771 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-04 00:48:15.796774 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.109) 0:03:13.281 ********
2026-04-04 00:48:15.796778 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796782 | orchestrator |
2026-04-04 00:48:15.796786 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-04 00:48:15.796790 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.088) 0:03:13.369 ********
2026-04-04 00:48:15.796794 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796798 | orchestrator |
2026-04-04 00:48:15.796801 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-04-04 00:48:15.796805 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.090) 0:03:13.460 ********
2026-04-04 00:48:15.796809 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.796813 | orchestrator |
2026-04-04 00:48:15.796817 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-04-04 00:48:15.796820 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.087) 0:03:13.547 ********
2026-04-04 00:48:15.796824 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796828 | orchestrator |
2026-04-04 00:48:15.796832 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-04-04 00:48:15.796836 | orchestrator | Saturday 04 April 2026 00:47:01 +0000 (0:00:05.096) 0:03:18.643 ********
2026-04-04 00:48:15.796840 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-04-04 00:48:15.796931 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-04-04 00:48:15.796938 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-04-04 00:48:15.796941 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-04-04 00:48:15.796945 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-04-04 00:48:15.796949 | orchestrator |
2026-04-04 00:48:15.796953 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-04-04 00:48:15.796957 | orchestrator | Saturday 04 April 2026 00:47:45 +0000 (0:00:43.558) 0:04:02.201 ********
2026-04-04 00:48:15.796961 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796965 | orchestrator |
2026-04-04 00:48:15.796968 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-04-04 00:48:15.796972 | orchestrator | Saturday 04 April 2026 00:47:46 +0000 (0:00:01.410) 0:04:03.612 ********
2026-04-04 00:48:15.796976 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796980 | orchestrator |
2026-04-04 00:48:15.796984 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-04-04 00:48:15.796988 | orchestrator | Saturday 04 April 2026 00:47:48 +0000 (0:00:01.622) 0:04:05.234 ********
2026-04-04 00:48:15.796995 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:48:15.796998 | orchestrator |
2026-04-04 00:48:15.797002 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-04-04 00:48:15.797006 | orchestrator | Saturday 04 April 2026 00:47:49 +0000 (0:00:00.103) 0:04:06.357 ********
2026-04-04 00:48:15.797010 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.797014 | orchestrator |
2026-04-04 00:48:15.797018 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-04-04 00:48:15.797022 | orchestrator | Saturday 04 April 2026 00:47:49 +0000 (0:00:00.103) 0:04:06.460 ********
2026-04-04 00:48:15.797026 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-04-04 00:48:15.797030 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-04-04 00:48:15.797034 | orchestrator |
2026-04-04 00:48:15.797037 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-04-04 00:48:15.797041 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:01.970) 0:04:08.431 ********
2026-04-04 00:48:15.797045 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.797048 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.797052 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.797056 | orchestrator |
2026-04-04 00:48:15.797060 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-04-04 00:48:15.797063 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:00.287) 0:04:08.719 ********
2026-04-04 00:48:15.797067 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.797071 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.797075 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.797079 | orchestrator |
2026-04-04 00:48:15.797083 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-04-04 00:48:15.797086 | orchestrator |
2026-04-04 00:48:15.797090 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-04-04 00:48:15.797094 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.828) 0:04:09.548 ********
2026-04-04 00:48:15.797098 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:15.797102 | orchestrator |
2026-04-04 00:48:15.797106 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-04-04 00:48:15.797110 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.114) 0:04:09.662 ********
2026-04-04 00:48:15.797114 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-04-04 00:48:15.797117 | orchestrator |
2026-04-04 00:48:15.797121 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-04-04 00:48:15.797125 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:00.269) 0:04:09.931 ********
2026-04-04 00:48:15.797129 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:15.797132 | orchestrator |
2026-04-04 00:48:15.797138 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-04-04 00:48:15.797142 | orchestrator |
2026-04-04 00:48:15.797146 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-04-04 00:48:15.797150 | orchestrator | Saturday 04 April 2026 00:47:58 +0000 (0:00:05.442) 0:04:15.374 ********
2026-04-04 00:48:15.797154 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:48:15.797157 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:48:15.797161 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:48:15.797165 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:15.797169 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:15.797172 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:15.797176 | orchestrator |
2026-04-04 00:48:15.797180 | orchestrator | TASK [Manage labels] ***********************************************************
2026-04-04 00:48:15.797184 | orchestrator | Saturday 04 April 2026 00:47:59 +0000 (0:00:00.499) 0:04:15.873 ********
2026-04-04 00:48:15.797187 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-04 00:48:15.797194 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-04 00:48:15.797198 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-04 00:48:15.797201 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-04 00:48:15.797205 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-04-04 00:48:15.797209 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-04-04 00:48:15.797213 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-04 00:48:15.797217 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-04 00:48:15.797223 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-04 00:48:15.797227 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-04 00:48:15.797231 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-04-04 00:48:15.797235 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-04-04 00:48:15.797238 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-04 00:48:15.797242 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-04 00:48:15.797246 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-04-04 00:48:15.797250 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-04 00:48:15.797253 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-04 00:48:15.797257 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-04 00:48:15.797261 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-04-04 00:48:15.797265 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-04 00:48:15.797268 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-04-04 00:48:15.797272 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-04 00:48:15.797276 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-04 00:48:15.797280 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-04 00:48:15.797283 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-04-04 00:48:15.797287 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-04 00:48:15.797291 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-04 00:48:15.797295 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-04 00:48:15.797298 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-04-04 00:48:15.797302 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-04-04 00:48:15.797306 | orchestrator |
2026-04-04 00:48:15.797310 | orchestrator | TASK [Manage annotations] ******************************************************
2026-04-04 00:48:15.797314 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:13.126) 0:04:29.000 ********
2026-04-04 00:48:15.797317 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.797321 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.797325 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.797329 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.797333 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.797336 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.797343 | orchestrator |
2026-04-04 00:48:15.797347 | orchestrator | TASK [Manage taints] ***********************************************************
2026-04-04 00:48:15.797351 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:00.458) 0:04:29.458 ********
2026-04-04 00:48:15.797354 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:48:15.797358 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:48:15.797362 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:48:15.797366 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:15.797369 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:48:15.797375 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:48:15.797379 | orchestrator |
2026-04-04 00:48:15.797382 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:15.797386 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:48:15.797391 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-04 00:48:15.797395 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-04 00:48:15.797398 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-04 00:48:15.797402 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-04 00:48:15.797406 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-04 00:48:15.797410 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-04 00:48:15.797413 | orchestrator |
2026-04-04 00:48:15.797417 | orchestrator |
2026-04-04 00:48:15.797421 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:15.797427 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.506) 0:04:29.965 ********
2026-04-04 00:48:15.797431 | orchestrator | ===============================================================================
2026-04-04 00:48:15.797435 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.05s
2026-04-04 00:48:15.797439 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.56s
2026-04-04 00:48:15.797443 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.16s
2026-04-04 00:48:15.797446 | orchestrator | Manage labels ---------------------------------------------------------- 13.13s
2026-04-04 00:48:15.797450 | orchestrator | kubectl : Install required packages ------------------------------------ 12.22s
2026-04-04 00:48:15.797454 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.63s
2026-04-04 00:48:15.797458 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.86s
2026-04-04 00:48:15.797461 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.44s
2026-04-04 00:48:15.797465 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.10s
2026-04-04 00:48:15.797469 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 4.86s
2026-04-04 00:48:15.797473 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.58s
2026-04-04 00:48:15.797476 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.42s
2026-04-04 00:48:15.797480 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.95s
2026-04-04 00:48:15.797484 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.62s
2026-04-04 00:48:15.797490 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.44s
2026-04-04 00:48:15.797494 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.98s
2026-04-04 00:48:15.797498 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.97s
2026-04-04 00:48:15.797502 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 1.81s
2026-04-04 00:48:15.797506 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.78s
2026-04-04 00:48:15.797509 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.70s
2026-04-04 00:48:15.797513 | orchestrator | 2026-04-04 00:48:15 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:18.831509 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:48:18.833028 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task e8d28cd4-af22-4fd8-8ec4-a1df31f3d769 is in state STARTED
2026-04-04 00:48:18.833076 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:48:18.833375 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:48:18.833713 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED
2026-04-04 00:48:18.835272 | orchestrator | 2026-04-04 00:48:18 | INFO  | Task 34e7e9dc-9f06-4b5c-af39-dd6db7ed7c4b is in state STARTED
2026-04-04 00:48:18.835335 | orchestrator | 2026-04-04 00:48:18 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:21.862953 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:48:21.863019 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task e8d28cd4-af22-4fd8-8ec4-a1df31f3d769 is in state SUCCESS
2026-04-04 00:48:21.863194 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:48:21.865948 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:48:21.866370 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED
2026-04-04 00:48:21.867383 | orchestrator | 2026-04-04 00:48:21 | INFO  | Task 34e7e9dc-9f06-4b5c-af39-dd6db7ed7c4b is in state STARTED
2026-04-04 00:48:21.867427 | orchestrator | 2026-04-04 00:48:21 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:24.892357 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:48:24.892443 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:48:24.892452 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state STARTED
2026-04-04 00:48:24.892928 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED
2026-04-04 00:48:24.893762 | orchestrator | 2026-04-04 00:48:24 | INFO  | Task 34e7e9dc-9f06-4b5c-af39-dd6db7ed7c4b is in state SUCCESS
2026-04-04 00:48:24.893794 | orchestrator | 2026-04-04 00:48:24 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:48:27.931667 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:48:27.931997 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:48:27.934097 | orchestrator |
2026-04-04 00:48:27.934152 | orchestrator |
2026-04-04 00:48:27.934158 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-04-04 00:48:27.934164 | orchestrator |
2026-04-04 00:48:27.934168 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-04 00:48:27.934173 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:00.221) 0:00:00.221 ********
2026-04-04 00:48:27.934178 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-04 00:48:27.934183 | orchestrator |
2026-04-04 00:48:27.934188 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-04 00:48:27.934193 | orchestrator | Saturday 04 April 2026 00:48:17 +0000 (0:00:00.957) 0:00:01.179 ********
2026-04-04 00:48:27.934197 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:27.934202 | orchestrator |
2026-04-04 00:48:27.934207 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-04-04 00:48:27.934211 | orchestrator | Saturday 04 April 2026 00:48:19 +0000 (0:00:01.669) 0:00:02.849 ********
2026-04-04 00:48:27.934216 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:27.934220 | orchestrator |
2026-04-04 00:48:27.934225 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:27.934230 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:48:27.934235 | orchestrator |
2026-04-04 00:48:27.934240 | orchestrator |
2026-04-04 00:48:27.934244 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:27.934249 | orchestrator | Saturday 04 April 2026 00:48:20 +0000 (0:00:00.591) 0:00:03.440 ********
2026-04-04 00:48:27.934253 | orchestrator | ===============================================================================
2026-04-04 00:48:27.934258 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.67s
2026-04-04 00:48:27.934262 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.96s
2026-04-04 00:48:27.934267 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.59s
2026-04-04 00:48:27.934271 | orchestrator |
2026-04-04 00:48:27.934276 | orchestrator |
2026-04-04 00:48:27.934280 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-04 00:48:27.934285 | orchestrator |
2026-04-04 00:48:27.934289 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-04 00:48:27.934294 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:00.223) 0:00:00.223 ********
2026-04-04 00:48:27.934298 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:27.934304 | orchestrator |
2026-04-04 00:48:27.934308 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-04 00:48:27.934313 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:00.667) 0:00:00.890 ********
2026-04-04 00:48:27.934317 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:27.934322 | orchestrator |
2026-04-04 00:48:27.934326 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-04 00:48:27.934331 | orchestrator | Saturday 04 April 2026 00:48:17 +0000 (0:00:00.506) 0:00:01.397 ********
2026-04-04 00:48:27.934335 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-04 00:48:27.934340 | orchestrator |
2026-04-04 00:48:27.934344 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-04 00:48:27.934349 | orchestrator | Saturday 04 April 2026 00:48:18 +0000 (0:00:00.918) 0:00:02.316 ********
2026-04-04 00:48:27.934354 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:27.934358 | orchestrator |
2026-04-04 00:48:27.934370 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-04 00:48:27.934375 | orchestrator | Saturday 04 April 2026 00:48:19 +0000 (0:00:01.085) 0:00:03.401 ********
2026-04-04 00:48:27.934379 | orchestrator | changed: [testbed-manager]
2026-04-04 00:48:27.934384 | orchestrator |
2026-04-04 00:48:27.934389 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-04 00:48:27.934393 | orchestrator | Saturday 04 April 2026 00:48:20 +0000 (0:00:00.645) 0:00:04.047 ********
2026-04-04 00:48:27.934402 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-04 00:48:27.934407 | orchestrator |
2026-04-04 00:48:27.934411 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-04 00:48:27.934416 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:01.477) 0:00:05.524 ********
2026-04-04 00:48:27.934421 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-04 00:48:27.934425 | orchestrator |
2026-04-04 00:48:27.934430 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-04 00:48:27.934434 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.308) 0:00:06.195 ********
2026-04-04 00:48:27.934439 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:27.934443 | orchestrator |
2026-04-04 00:48:27.934448 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-04 00:48:27.934452 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.308) 0:00:06.503 ********
2026-04-04 00:48:27.934457 | orchestrator | ok: [testbed-manager]
2026-04-04 00:48:27.934461 | orchestrator |
2026-04-04 00:48:27.934466 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:48:27.934471 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:48:27.934475 | orchestrator |
2026-04-04 00:48:27.934480 | orchestrator |
2026-04-04 00:48:27.934484 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:48:27.934489 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:00.296) 0:00:06.799 ********
2026-04-04 00:48:27.934493 | orchestrator | ===============================================================================
2026-04-04 00:48:27.934498 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.48s
2026-04-04 00:48:27.934502 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.09s
2026-04-04 00:48:27.934508 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.92s
2026-04-04 00:48:27.934544 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.67s
2026-04-04 00:48:27.934550 | orchestrator | Get home directory of operator user ------------------------------------- 0.67s
2026-04-04 00:48:27.934555 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.65s
2026-04-04 00:48:27.934559 | orchestrator | Create .kube directory -------------------------------------------------- 0.51s
2026-04-04 00:48:27.934564 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.31s
2026-04-04 00:48:27.934569 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s
2026-04-04 00:48:27.934573 | orchestrator |
2026-04-04 00:48:27.934578 | orchestrator |
2026-04-04 00:48:27.934582 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-04-04 00:48:27.934599 | orchestrator |
2026-04-04 00:48:27.934605 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-04 00:48:27.934609 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:00.114) 0:00:00.114 ********
2026-04-04 00:48:27.934614 | orchestrator | ok: [localhost] => {
2026-04-04 00:48:27.934619 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-04-04 00:48:27.934623 | orchestrator | }
2026-04-04 00:48:27.934628 | orchestrator |
2026-04-04 00:48:27.934633 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-04-04 00:48:27.934637 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:00.043) 0:00:00.158 ********
2026-04-04 00:48:27.934642 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-04-04 00:48:27.934647 | orchestrator | ...ignoring
2026-04-04 00:48:27.934652 | orchestrator |
2026-04-04 00:48:27.934656 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-04-04 00:48:27.934664 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:02.957) 0:00:03.116 ********
2026-04-04 00:48:27.934669 | orchestrator | skipping: [localhost]
2026-04-04 00:48:27.934673 | orchestrator |
2026-04-04 00:48:27.934678 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-04-04 00:48:27.934682 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:00.121) 0:00:03.238 ********
2026-04-04 00:48:27.934687 | orchestrator | ok: [localhost]
2026-04-04 00:48:27.934691 | orchestrator |
2026-04-04 00:48:27.934696 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:48:27.934700 | orchestrator |
2026-04-04 00:48:27.934705 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:48:27.934709 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.209) 0:00:03.447 ********
2026-04-04 00:48:27.934714 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:27.934718 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:48:27.934723 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:48:27.934727 | orchestrator |
2026-04-04 00:48:27.934732 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:48:27.934736 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.411) 0:00:03.859 ********
2026-04-04 00:48:27.934741 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-04 00:48:27.934745 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-04 00:48:27.934750 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-04 00:48:27.934754 | orchestrator |
2026-04-04 00:48:27.934762 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-04 00:48:27.934766 | orchestrator |
2026-04-04 00:48:27.934771 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-04 00:48:27.934775 | orchestrator | Saturday 04 April 2026 00:46:16 +0000 (0:00:00.587) 0:00:04.447 ********
2026-04-04 00:48:27.934780 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:48:27.934785 | orchestrator |
2026-04-04 00:48:27.934789 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-04 00:48:27.934794 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:01.226) 0:00:05.674 ********
2026-04-04 00:48:27.934798 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:48:27.934803 | orchestrator |
2026-04-04 00:48:27.934807 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-04 00:48:27.934812 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:01.269) 0:00:06.943 ********
2026-04-04 00:48:27.934816 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:27.934821 | orchestrator |
2026-04-04 00:48:27.934825 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-04 00:48:27.934830 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:00.336) 0:00:07.279 ********
2026-04-04 00:48:27.934834 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:48:27.934839 | orchestrator |
2026-04-04 00:48:27.934843 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-04 00:48:27.934848 |
orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.531) 0:00:07.811 ******** 2026-04-04 00:48:27.934852 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:27.934857 | orchestrator | 2026-04-04 00:48:27.934861 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-04 00:48:27.934866 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.407) 0:00:08.218 ******** 2026-04-04 00:48:27.934870 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:27.934875 | orchestrator | 2026-04-04 00:48:27.934879 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-04 00:48:27.934884 | orchestrator | Saturday 04 April 2026 00:46:20 +0000 (0:00:00.688) 0:00:08.907 ******** 2026-04-04 00:48:27.934888 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:48:27.934895 | orchestrator | 2026-04-04 00:48:27.934900 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-04 00:48:27.934908 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:00.610) 0:00:09.517 ******** 2026-04-04 00:48:27.934913 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:27.934918 | orchestrator | 2026-04-04 00:48:27.934926 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-04 00:48:27.934933 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:00.848) 0:00:10.365 ******** 2026-04-04 00:48:27.934945 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:27.934953 | orchestrator | 2026-04-04 00:48:27.934960 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-04 00:48:27.934968 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.617) 0:00:10.983 ******** 2026-04-04 00:48:27.934975 | orchestrator | 
skipping: [testbed-node-0] 2026-04-04 00:48:27.934981 | orchestrator | 2026-04-04 00:48:27.934988 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-04 00:48:27.934996 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.334) 0:00:11.317 ******** 2026-04-04 00:48:27.935007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935043 | orchestrator | 2026-04-04 00:48:27.935051 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-04 00:48:27.935059 | orchestrator | Saturday 04 April 2026 00:46:24 +0000 (0:00:01.472) 0:00:12.790 ******** 2026-04-04 00:48:27.935073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935103 | orchestrator | 2026-04-04 00:48:27.935108 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-04 00:48:27.935113 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:02.254) 0:00:15.044 ******** 2026-04-04 00:48:27.935121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:27.935126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:27.935130 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-04 00:48:27.935135 | orchestrator | 2026-04-04 00:48:27.935139 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-04-04 00:48:27.935144 | orchestrator | Saturday 04 April 2026 00:46:29 +0000 (0:00:03.051) 0:00:18.096 ******** 2026-04-04 00:48:27.935148 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-04 00:48:27.935153 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-04 00:48:27.935158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-04 00:48:27.935162 | orchestrator | 2026-04-04 00:48:27.935167 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-04 00:48:27.935174 | orchestrator | Saturday 04 April 2026 00:46:32 +0000 (0:00:03.030) 0:00:21.127 ******** 2026-04-04 00:48:27.935179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-04 00:48:27.935183 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-04 00:48:27.935188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-04 00:48:27.935192 | orchestrator | 2026-04-04 00:48:27.935197 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-04 00:48:27.935201 | orchestrator | Saturday 04 April 2026 00:46:34 +0000 (0:00:01.399) 0:00:22.527 ******** 2026-04-04 00:48:27.935206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-04 00:48:27.935211 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-04 00:48:27.935215 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-04 00:48:27.935220 | orchestrator | 2026-04-04 00:48:27.935224 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2026-04-04 00:48:27.935229 | orchestrator | Saturday 04 April 2026 00:46:36 +0000 (0:00:01.917) 0:00:24.444 ******** 2026-04-04 00:48:27.935233 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-04 00:48:27.935238 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-04 00:48:27.935242 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-04 00:48:27.935247 | orchestrator | 2026-04-04 00:48:27.935251 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-04 00:48:27.935256 | orchestrator | Saturday 04 April 2026 00:46:37 +0000 (0:00:01.782) 0:00:26.226 ******** 2026-04-04 00:48:27.935260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-04 00:48:27.935265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-04 00:48:27.935270 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-04 00:48:27.935274 | orchestrator | 2026-04-04 00:48:27.935279 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-04 00:48:27.935283 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:01.951) 0:00:28.178 ******** 2026-04-04 00:48:27.935288 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:27.935293 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:27.935297 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:27.935302 | orchestrator | 2026-04-04 00:48:27.935306 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-04 00:48:27.935314 | orchestrator | Saturday 04 April 2026 00:46:40 
+0000 (0:00:00.739) 0:00:28.917 ******** 2026-04-04 00:48:27.935321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:48:27.935340 | orchestrator | 2026-04-04 00:48:27.935345 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-04 00:48:27.935349 | orchestrator | Saturday 04 April 2026 00:46:42 +0000 (0:00:01.589) 0:00:30.507 ******** 2026-04-04 00:48:27.935354 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:27.935358 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:27.935363 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:27.935367 | orchestrator | 2026-04-04 00:48:27.935372 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-04-04 00:48:27.935376 | 
orchestrator | Saturday 04 April 2026 00:46:43 +0000 (0:00:01.275) 0:00:31.783 ******** 2026-04-04 00:48:27.935384 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:27.935388 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:27.935393 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:27.935398 | orchestrator | 2026-04-04 00:48:27.935402 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-04 00:48:27.935407 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:08.821) 0:00:40.604 ******** 2026-04-04 00:48:27.935411 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:27.935416 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:27.935421 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:27.935425 | orchestrator | 2026-04-04 00:48:27.935430 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-04 00:48:27.935434 | orchestrator | 2026-04-04 00:48:27.935439 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-04 00:48:27.935443 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.636) 0:00:41.240 ******** 2026-04-04 00:48:27.935448 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:27.935453 | orchestrator | 2026-04-04 00:48:27.935459 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-04 00:48:27.935464 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:00.687) 0:00:41.927 ******** 2026-04-04 00:48:27.935469 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:48:27.935473 | orchestrator | 2026-04-04 00:48:27.935478 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-04 00:48:27.935483 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:00.231) 0:00:42.159 ******** 2026-04-04 00:48:27.935487 | orchestrator 
| changed: [testbed-node-0] 2026-04-04 00:48:27.935492 | orchestrator | 2026-04-04 00:48:27.935496 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-04 00:48:27.935501 | orchestrator | Saturday 04 April 2026 00:47:00 +0000 (0:00:06.658) 0:00:48.817 ******** 2026-04-04 00:48:27.935505 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:48:27.935510 | orchestrator | 2026-04-04 00:48:27.935514 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-04 00:48:27.935519 | orchestrator | 2026-04-04 00:48:27.935524 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-04 00:48:27.935528 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:51.248) 0:01:40.066 ******** 2026-04-04 00:48:27.935533 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:27.935537 | orchestrator | 2026-04-04 00:48:27.935542 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-04 00:48:27.935546 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.555) 0:01:40.622 ******** 2026-04-04 00:48:27.935551 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:48:27.935556 | orchestrator | 2026-04-04 00:48:27.935560 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-04 00:48:27.935565 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.195) 0:01:40.818 ******** 2026-04-04 00:48:27.935569 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:27.935574 | orchestrator | 2026-04-04 00:48:27.935578 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-04 00:48:27.935583 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:01.664) 0:01:42.482 ******** 2026-04-04 00:48:27.935630 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:48:27.935636 
| orchestrator | 2026-04-04 00:48:27.935641 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-04 00:48:27.935645 | orchestrator | 2026-04-04 00:48:27.935650 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-04 00:48:27.935654 | orchestrator | Saturday 04 April 2026 00:48:07 +0000 (0:00:13.037) 0:01:55.519 ******** 2026-04-04 00:48:27.935659 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:27.935664 | orchestrator | 2026-04-04 00:48:27.935671 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-04 00:48:27.935676 | orchestrator | Saturday 04 April 2026 00:48:07 +0000 (0:00:00.705) 0:01:56.225 ******** 2026-04-04 00:48:27.935688 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:48:27.935692 | orchestrator | 2026-04-04 00:48:27.935697 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-04 00:48:27.935702 | orchestrator | Saturday 04 April 2026 00:48:08 +0000 (0:00:00.481) 0:01:56.706 ******** 2026-04-04 00:48:27.935706 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:27.935711 | orchestrator | 2026-04-04 00:48:27.935715 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-04 00:48:27.935720 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:01.542) 0:01:58.248 ******** 2026-04-04 00:48:27.935725 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:48:27.935729 | orchestrator | 2026-04-04 00:48:27.935734 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-04 00:48:27.935738 | orchestrator | 2026-04-04 00:48:27.935743 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-04 00:48:27.935747 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:13.421) 
0:02:11.670 ******** 2026-04-04 00:48:27.935752 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:48:27.935756 | orchestrator | 2026-04-04 00:48:27.935761 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-04 00:48:27.935765 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:00.527) 0:02:12.198 ******** 2026-04-04 00:48:27.935770 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:48:27.935775 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:48:27.935779 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:48:27.935784 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-04 00:48:27.935788 | orchestrator | enable_outward_rabbitmq_True 2026-04-04 00:48:27.935793 | orchestrator | 2026-04-04 00:48:27.935797 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-04-04 00:48:27.935802 | orchestrator | skipping: no hosts matched 2026-04-04 00:48:27.935806 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-04 00:48:27.935811 | orchestrator | outward_rabbitmq_restart 2026-04-04 00:48:27.935816 | orchestrator | 2026-04-04 00:48:27.935820 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-04-04 00:48:27.935825 | orchestrator | skipping: no hosts matched 2026-04-04 00:48:27.935829 | orchestrator | 2026-04-04 00:48:27.935834 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-04-04 00:48:27.935838 | orchestrator | skipping: no hosts matched 2026-04-04 00:48:27.935843 | orchestrator | 2026-04-04 00:48:27.935847 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:48:27.935852 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-04-04 
00:48:27.935857 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-04 00:48:27.935861 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:48:27.935866 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 00:48:27.935871 | orchestrator | 2026-04-04 00:48:27.935876 | orchestrator | 2026-04-04 00:48:27.935880 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:48:27.935885 | orchestrator | Saturday 04 April 2026 00:48:27 +0000 (0:00:03.638) 0:02:15.837 ******** 2026-04-04 00:48:27.935889 | orchestrator | =============================================================================== 2026-04-04 00:48:27.935894 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 77.71s 2026-04-04 00:48:27.935898 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.87s 2026-04-04 00:48:27.935906 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.82s 2026-04-04 00:48:27.935910 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.64s 2026-04-04 00:48:27.935915 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.05s 2026-04-04 00:48:27.935919 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.03s 2026-04-04 00:48:27.935924 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.96s 2026-04-04 00:48:27.935930 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.25s 2026-04-04 00:48:27.935938 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.95s 2026-04-04 00:48:27.935950 | 
orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s 2026-04-04 00:48:27.935958 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.92s 2026-04-04 00:48:27.935965 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.78s 2026-04-04 00:48:27.935972 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.59s 2026-04-04 00:48:27.935979 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.47s 2026-04-04 00:48:27.935986 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s 2026-04-04 00:48:27.936037 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.28s 2026-04-04 00:48:27.936057 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s 2026-04-04 00:48:27.936072 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.23s 2026-04-04 00:48:27.936080 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.91s 2026-04-04 00:48:27.936087 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s 2026-04-04 00:48:27.936095 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 3d459c63-ad65-4729-bf42-e3d0b5d6225a is in state SUCCESS 2026-04-04 00:48:27.936165 | orchestrator | 2026-04-04 00:48:27 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED 2026-04-04 00:48:27.936177 | orchestrator | 2026-04-04 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:48:30.972635 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:48:30.973368 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 
2026-04-04 00:48:30.974469 | orchestrator | 2026-04-04 00:48:30 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state STARTED
2026-04-04 00:48:30.974512 | orchestrator | 2026-04-04 00:48:30 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/wait polling records for tasks f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34, 9ea14550-5acd-457e-8e9d-21de3f3077ec, and 390c6e62-a157-41b9-9f50-8d897084412d repeated every ~3 s from 00:48:34 through 00:49:16]
2026-04-04
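The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a simple state-polling loop. A minimal Python sketch of that loop, where the `get_state` callable and the helper name are hypothetical stand-ins for the real OSISM task-state lookup:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll each task's state until all of them leave STARTED,
    mirroring the 'is in state ... / Wait N second(s)' log lines."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)  # hypothetical backend lookup
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return True
```

Note that the wall-clock gap between checks in the log (~3 s) is larger than the advertised 1 s wait, since each state lookup itself takes time.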
00:49:16.715441 | orchestrator | 2026-04-04 00:49:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:19.749991 | orchestrator | 2026-04-04 00:49:19 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:19.750091 | orchestrator | 2026-04-04 00:49:19 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:19.750964 | orchestrator | 2026-04-04 00:49:19 | INFO  | Task 390c6e62-a157-41b9-9f50-8d897084412d is in state SUCCESS 2026-04-04 00:49:19.752007 | orchestrator | 2026-04-04 00:49:19.752050 | orchestrator | 2026-04-04 00:49:19.752059 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:49:19.752066 | orchestrator | 2026-04-04 00:49:19.752073 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:49:19.752095 | orchestrator | Saturday 04 April 2026 00:46:58 +0000 (0:00:00.194) 0:00:00.194 ******** 2026-04-04 00:49:19.752101 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:49:19.752108 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:49:19.752114 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:49:19.752121 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:19.752126 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:19.752133 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:19.752140 | orchestrator | 2026-04-04 00:49:19.752146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:49:19.752151 | orchestrator | Saturday 04 April 2026 00:46:59 +0000 (0:00:01.006) 0:00:01.201 ******** 2026-04-04 00:49:19.752158 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-04 00:49:19.752165 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-04 00:49:19.752172 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-04 00:49:19.752178 | orchestrator 
| ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-04 00:49:19.752185 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-04 00:49:19.752192 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-04 00:49:19.752198 | orchestrator | 2026-04-04 00:49:19.752204 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-04 00:49:19.752211 | orchestrator | 2026-04-04 00:49:19.752218 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-04 00:49:19.752224 | orchestrator | Saturday 04 April 2026 00:47:00 +0000 (0:00:01.114) 0:00:02.316 ******** 2026-04-04 00:49:19.752231 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:49:19.752238 | orchestrator | 2026-04-04 00:49:19.752245 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-04 00:49:19.752251 | orchestrator | Saturday 04 April 2026 00:47:01 +0000 (0:00:01.266) 0:00:03.583 ******** 2026-04-04 00:49:19.752260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752412 | 
orchestrator | 2026-04-04 00:49:19.752431 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-04 00:49:19.752438 | orchestrator | Saturday 04 April 2026 00:47:03 +0000 (0:00:02.107) 0:00:05.690 ******** 2026-04-04 00:49:19.752445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752495 | orchestrator | 2026-04-04 00:49:19.752503 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-04 00:49:19.752511 | orchestrator | Saturday 04 April 2026 00:47:05 +0000 (0:00:01.396) 0:00:07.086 ******** 2026-04-04 00:49:19.752519 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752592 | orchestrator | 2026-04-04 00:49:19.752598 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-04 00:49:19.752605 | orchestrator | Saturday 04 April 2026 00:47:06 +0000 (0:00:01.209) 0:00:08.296 ******** 2026-04-04 00:49:19.752612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752686 | orchestrator | 2026-04-04 00:49:19.752698 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-04 00:49:19.752705 | orchestrator | Saturday 04 April 2026 00:47:08 +0000 (0:00:02.035) 0:00:10.331 ******** 2026-04-04 00:49:19.752713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752721 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.752767 | orchestrator | 2026-04-04 00:49:19.752773 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-04 00:49:19.752781 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:01.350) 0:00:11.682 ******** 2026-04-04 00:49:19.752788 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:49:19.752797 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:49:19.752805 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:49:19.752812 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:19.752820 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:19.752827 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:19.752834 | orchestrator | 2026-04-04 00:49:19.752841 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-04 00:49:19.752848 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:02.277) 0:00:13.959 ******** 2026-04-04 00:49:19.752855 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-04 00:49:19.752863 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-04 00:49:19.752870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-04 00:49:19.752877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-04 00:49:19.752883 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-04 00:49:19.752891 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-04 00:49:19.752898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752906 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-04 00:49:19.752948 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752956 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752963 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752977 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752984 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-04 00:49:19.752992 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753000 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753017 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753031 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753039 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-04 00:49:19.753046 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753060 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753067 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753074 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-04 00:49:19.753092 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:49:19.753100 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:49:19.753106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-04 00:49:19.753113 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-monitor-all', 'value': False})
2026-04-04 00:49:19.753121 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-04 00:49:19.753242 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-04-04 00:49:19.753259 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-04 00:49:19.753268 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-04 00:49:19.753274 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-04 00:49:19.753281 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-04 00:49:19.753287 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-04-04 00:49:19.753294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-04-04 00:49:19.753301 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-04-04 00:49:19.753309 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-04-04 00:49:19.753332 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-04-04 00:49:19.753341 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-04-04 00:49:19.753348 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-04-04 00:49:19.753355 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-04-04 00:49:19.753363 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-04 00:49:19.753377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-04 00:49:19.753385 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-04 00:49:19.753392 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-04-04 00:49:19.753399 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-04 00:49:19.753406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-04-04 00:49:19.753414 | orchestrator |
2026-04-04 00:49:19.753421 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753429 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:17.108) 0:00:31.067 ********
2026-04-04 00:49:19.753436 | orchestrator |
2026-04-04 00:49:19.753443 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753450 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.093) 0:00:31.161 ********
2026-04-04 00:49:19.753458 | orchestrator |
2026-04-04 00:49:19.753465 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753473 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.135) 0:00:31.296 ********
2026-04-04 00:49:19.753480 | orchestrator |
2026-04-04 00:49:19.753488 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753495 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.144) 0:00:31.441 ********
2026-04-04 00:49:19.753502 | orchestrator |
2026-04-04 00:49:19.753510 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753517 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.133) 0:00:31.575 ********
2026-04-04 00:49:19.753525 | orchestrator |
2026-04-04 00:49:19.753532 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-04-04 00:49:19.753539 | orchestrator | Saturday 04 April 2026 00:47:30 +0000 (0:00:00.135) 0:00:31.710 ********
2026-04-04 00:49:19.753559 | orchestrator |
2026-04-04 00:49:19.753566 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-04-04 00:49:19.753573 | orchestrator | Saturday 04 April 2026 00:47:30 +0000 (0:00:00.108) 0:00:31.819 ********
2026-04-04 00:49:19.753586 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753593 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753601 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:49:19.753607 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:49:19.753613 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753620 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:49:19.753627 | orchestrator |
2026-04-04 00:49:19.753635 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-04-04 00:49:19.753642 | orchestrator | Saturday 04 April 2026 00:47:32 +0000 (0:00:01.918) 0:00:33.737 ********
2026-04-04 00:49:19.753649 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.753657 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:49:19.753665 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:49:19.753671 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:49:19.753678 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:49:19.753684 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:49:19.753691 | orchestrator |
2026-04-04 00:49:19.753697 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-04-04 00:49:19.753704 | orchestrator |
2026-04-04 00:49:19.753711 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-04 00:49:19.753717 | orchestrator | Saturday 04 April 2026 00:48:03 +0000 (0:00:31.024) 0:01:04.762 ********
2026-04-04 00:49:19.753723 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:49:19.753737 | orchestrator |
2026-04-04 00:49:19.753744 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-04 00:49:19.753752 | orchestrator | Saturday 04 April 2026 00:48:03 +0000 (0:00:00.638) 0:01:05.400 ********
2026-04-04 00:49:19.753759 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:49:19.753765 | orchestrator |
2026-04-04 00:49:19.753771 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-04-04 00:49:19.753778 | orchestrator | Saturday 04 April 2026 00:48:04 +0000 (0:00:00.604) 0:01:06.005 ********
2026-04-04 00:49:19.753785 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753792 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753799 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753806 | orchestrator |
2026-04-04 00:49:19.753813 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-04-04 00:49:19.753821 | orchestrator | Saturday 04 April 2026 00:48:05 +0000 (0:00:00.729) 0:01:06.735 ********
2026-04-04 00:49:19.753828 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753835 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753843 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753859 | orchestrator |
2026-04-04 00:49:19.753867 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-04-04 00:49:19.753874 | orchestrator | Saturday 04 April 2026 00:48:05 +0000 (0:00:00.322) 0:01:07.057 ********
2026-04-04 00:49:19.753881 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753887 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753895 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753901 | orchestrator |
2026-04-04 00:49:19.753908 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-04-04 00:49:19.753915 | orchestrator | Saturday 04 April 2026 00:48:06 +0000 (0:00:00.783) 0:01:07.841 ********
2026-04-04 00:49:19.753922 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753929 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753935 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753942 | orchestrator |
2026-04-04 00:49:19.753948 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-04-04 00:49:19.753956 | orchestrator | Saturday 04 April 2026 00:48:06 +0000 (0:00:00.364) 0:01:08.206 ********
2026-04-04 00:49:19.753963 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.753970 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.753977 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.753984 | orchestrator |
2026-04-04 00:49:19.753991 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-04-04 00:49:19.753998 | orchestrator | Saturday 04 April 2026 00:48:07 +0000 (0:00:00.616) 0:01:08.822 ********
2026-04-04 00:49:19.754005 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754053 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754065 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754072 | orchestrator |
2026-04-04 00:49:19.754079 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-04-04 00:49:19.754086 | orchestrator | Saturday 04 April 2026 00:48:07 +0000 (0:00:00.579) 0:01:09.402 ********
2026-04-04 00:49:19.754093 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754099 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754106 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754113 | orchestrator |
2026-04-04 00:49:19.754120 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-04-04 00:49:19.754127 | orchestrator | Saturday 04 April 2026 00:48:08 +0000 (0:00:00.664) 0:01:10.067 ********
2026-04-04 00:49:19.754134 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754141 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754148 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754155 | orchestrator |
2026-04-04 00:49:19.754162 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-04-04 00:49:19.754179 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.886) 0:01:10.953 ********
2026-04-04 00:49:19.754187 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754194 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754201 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754208 | orchestrator |
2026-04-04 00:49:19.754215 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-04-04 00:49:19.754222 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.273) 0:01:11.227 ********
2026-04-04 00:49:19.754229 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754236 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754243 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754249 | orchestrator |
2026-04-04 00:49:19.754257 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-04-04 00:49:19.754264 | orchestrator | Saturday 04 April 2026 00:48:09 +0000 (0:00:00.360) 0:01:11.588 ********
2026-04-04 00:49:19.754270 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754281 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754287 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754294 | orchestrator |
2026-04-04 00:49:19.754300 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-04-04 00:49:19.754306 | orchestrator | Saturday 04 April 2026 00:48:10 +0000 (0:00:00.438) 0:01:12.026 ********
2026-04-04 00:49:19.754313 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754320 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754327 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754334 | orchestrator |
2026-04-04 00:49:19.754341 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-04-04 00:49:19.754348 | orchestrator | Saturday 04 April 2026 00:48:10 +0000 (0:00:00.425) 0:01:12.451 ********
2026-04-04 00:49:19.754355 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754361 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754368 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754374 | orchestrator |
2026-04-04 00:49:19.754381 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-04-04 00:49:19.754388 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.244) 0:01:12.695 ********
2026-04-04 00:49:19.754394 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754401 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754408 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754415 | orchestrator |
2026-04-04 00:49:19.754423 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-04-04 00:49:19.754430 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.278) 0:01:12.974 ********
2026-04-04 00:49:19.754436 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754443 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754449 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754456 | orchestrator |
2026-04-04 00:49:19.754463 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-04-04 00:49:19.754470 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.286) 0:01:13.260 ********
2026-04-04 00:49:19.754476 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754482 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754488 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754495 | orchestrator |
2026-04-04 00:49:19.754501 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-04-04 00:49:19.754508 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:00.374) 0:01:13.634 ********
2026-04-04 00:49:19.754515 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754522 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754536 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754556 | orchestrator |
2026-04-04 00:49:19.754565 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-04-04 00:49:19.754578 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:00.268) 0:01:13.903 ********
2026-04-04 00:49:19.754586 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:49:19.754593 | orchestrator |
2026-04-04 00:49:19.754599 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-04-04 00:49:19.754606 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:00.510) 0:01:14.413 ********
2026-04-04 00:49:19.754613 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.754621 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.754627 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.754634 | orchestrator |
2026-04-04 00:49:19.754642 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-04-04 00:49:19.754649 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.533) 0:01:14.947 ********
2026-04-04 00:49:19.754655 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.754662 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.754668 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.754674 | orchestrator |
2026-04-04 00:49:19.754681 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-04-04 00:49:19.754689 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.312) 0:01:15.260 ********
2026-04-04 00:49:19.754695 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754702 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754708 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754715 | orchestrator |
2026-04-04 00:49:19.754722 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-04-04 00:49:19.754729 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:00.282) 0:01:15.543 ********
2026-04-04 00:49:19.754735 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754741 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754747 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754754 | orchestrator |
2026-04-04 00:49:19.754761 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-04-04 00:49:19.754767 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:00.370) 0:01:15.913 ********
2026-04-04 00:49:19.754774 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754781 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754788 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754794 | orchestrator |
2026-04-04 00:49:19.754801 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-04-04 00:49:19.754807 | orchestrator | Saturday 04 April 2026 00:48:14 +0000 (0:00:00.638) 0:01:16.552 ********
2026-04-04 00:49:19.754813 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754819 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754826 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754834 | orchestrator |
2026-04-04 00:49:19.754841 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-04-04 00:49:19.754848 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:00.374) 0:01:16.926 ********
2026-04-04 00:49:19.754855 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754862 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754869 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754876 | orchestrator |
2026-04-04 00:49:19.754882 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-04-04 00:49:19.754895 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:00.633) 0:01:17.559 ********
2026-04-04 00:49:19.754902 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.754909 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.754915 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.754922 | orchestrator |
2026-04-04 00:49:19.754928 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-04 00:49:19.754934 | orchestrator | Saturday 04 April 2026 00:48:16 +0000 (0:00:00.704) 0:01:18.264 ********
2026-04-04 00:49:19.754949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.754965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.754972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.754987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.754997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755031 | orchestrator |
2026-04-04 00:49:19.755038 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-04 00:49:19.755044 | orchestrator | Saturday 04 April 2026 00:48:18 +0000 (0:00:01.658) 0:01:19.922 ********
2026-04-04 00:49:19.755056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755127 | orchestrator |
2026-04-04 00:49:19.755134 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-04-04 00:49:19.755140 | orchestrator | Saturday 04 April 2026 00:48:22 +0000 (0:00:04.104) 0:01:24.027 ********
2026-04-04 00:49:19.755147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755218 | orchestrator |
2026-04-04 00:49:19.755225 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:49:19.755232 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:02.304) 0:01:26.331 ********
2026-04-04 00:49:19.755238 | orchestrator |
2026-04-04 00:49:19.755245 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:49:19.755256 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:00.059) 0:01:26.391 ********
2026-04-04 00:49:19.755262 | orchestrator |
2026-04-04 00:49:19.755269 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-04 00:49:19.755275 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:00.063) 0:01:26.454 ********
2026-04-04 00:49:19.755281 | orchestrator |
2026-04-04 00:49:19.755287 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-04 00:49:19.755293 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:00.065) 0:01:26.520 ********
2026-04-04 00:49:19.755299 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.755306 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:49:19.755313 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:49:19.755319 | orchestrator |
2026-04-04 00:49:19.755326 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-04 00:49:19.755332 | orchestrator | Saturday 04 April 2026 00:48:27 +0000 (0:00:02.663) 0:01:29.184 ********
2026-04-04 00:49:19.755338 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.755344 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:49:19.755351 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:49:19.755357 | orchestrator |
2026-04-04 00:49:19.755372 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-04 00:49:19.755380 | orchestrator | Saturday 04 April 2026 00:48:34 +0000 (0:00:07.298) 0:01:36.482 ********
2026-04-04 00:49:19.755387 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.755392 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:49:19.755398 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:49:19.755405 | orchestrator |
2026-04-04 00:49:19.755412 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-04 00:49:19.755419 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:07.789) 0:01:44.272 ********
2026-04-04 00:49:19.755426 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:49:19.755432 | orchestrator |
2026-04-04 00:49:19.755438 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-04-04 00:49:19.755444 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.120) 0:01:44.393 ********
2026-04-04 00:49:19.755450 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.755458 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.755464 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.755471 | orchestrator |
2026-04-04 00:49:19.755477 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-04-04 00:49:19.755484 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:00.919) 0:01:45.313 ********
2026-04-04 00:49:19.755491 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.755497 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.755503 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.755510 | orchestrator |
2026-04-04 00:49:19.755517 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-04-04 00:49:19.755524 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:00.548) 0:01:45.862 ********
2026-04-04 00:49:19.755531 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.755538 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.755632 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.755643 | orchestrator |
2026-04-04 00:49:19.755650 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-04-04 00:49:19.755657 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.873) 0:01:46.735 ********
2026-04-04 00:49:19.755664 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:49:19.755671 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:49:19.755679 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:49:19.755685 | orchestrator |
2026-04-04 00:49:19.755692 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-04-04 00:49:19.755697 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.591) 0:01:47.326 ********
2026-04-04 00:49:19.755704 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.755718 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.755733 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.755740 | orchestrator |
2026-04-04 00:49:19.755747 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-04-04 00:49:19.755754 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.759) 0:01:48.086 ********
2026-04-04 00:49:19.755761 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.755767 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.755774 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.755782 | orchestrator |
2026-04-04 00:49:19.755788 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-04-04 00:49:19.755795 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.817) 0:01:48.903 ********
2026-04-04 00:49:19.755801 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:49:19.755808 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:49:19.755815 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:49:19.755821 | orchestrator |
2026-04-04 00:49:19.755828 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-04-04 00:49:19.755834 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.407) 0:01:49.311 ********
2026-04-04 00:49:19.755842 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:49:19.755856 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755864 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755876 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755923 | orchestrator | 2026-04-04 00:49:19.755930 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-04 00:49:19.755937 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:01.490) 0:01:50.802 ******** 2026-04-04 00:49:19.755944 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 
00:49:19.755958 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.755980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756008 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756036 | orchestrator | 2026-04-04 00:49:19.756043 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-04 00:49:19.756050 | orchestrator | Saturday 04 April 2026 00:48:53 +0000 (0:00:03.918) 0:01:54.720 ******** 2026-04-04 00:49:19.756062 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756083 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756112 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 00:49:19.756130 | orchestrator | 2026-04-04 00:49:19.756136 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:49:19.756142 | orchestrator | Saturday 04 April 2026 00:48:55 +0000 (0:00:02.741) 0:01:57.461 ******** 2026-04-04 00:49:19.756149 | orchestrator | 2026-04-04 00:49:19.756156 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:49:19.756162 | orchestrator | Saturday 04 April 2026 00:48:55 +0000 (0:00:00.059) 0:01:57.520 ******** 2026-04-04 00:49:19.756169 | orchestrator | 2026-04-04 00:49:19.756176 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-04 00:49:19.756183 | orchestrator | Saturday 04 April 2026 00:48:55 +0000 
(0:00:00.057) 0:01:57.578 ******** 2026-04-04 00:49:19.756189 | orchestrator | 2026-04-04 00:49:19.756196 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-04 00:49:19.756202 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:00.173) 0:01:57.751 ******** 2026-04-04 00:49:19.756208 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:19.756215 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:19.756222 | orchestrator | 2026-04-04 00:49:19.756234 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-04 00:49:19.756241 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:06.106) 0:02:03.857 ******** 2026-04-04 00:49:19.756247 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:19.756253 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:19.756259 | orchestrator | 2026-04-04 00:49:19.756265 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-04 00:49:19.756271 | orchestrator | Saturday 04 April 2026 00:49:08 +0000 (0:00:06.103) 0:02:09.961 ******** 2026-04-04 00:49:19.756277 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:49:19.756283 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:49:19.756289 | orchestrator | 2026-04-04 00:49:19.756295 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-04 00:49:19.756302 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:06.055) 0:02:16.016 ******** 2026-04-04 00:49:19.756308 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:49:19.756314 | orchestrator | 2026-04-04 00:49:19.756321 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-04 00:49:19.756327 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:00.134) 0:02:16.151 ******** 2026-04-04 00:49:19.756334 | orchestrator | 
ok: [testbed-node-0] 2026-04-04 00:49:19.756342 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:19.756349 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:19.756356 | orchestrator | 2026-04-04 00:49:19.756363 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-04 00:49:19.756370 | orchestrator | Saturday 04 April 2026 00:49:15 +0000 (0:00:00.784) 0:02:16.936 ******** 2026-04-04 00:49:19.756377 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:19.756384 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:19.756391 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:19.756398 | orchestrator | 2026-04-04 00:49:19.756404 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-04 00:49:19.756409 | orchestrator | Saturday 04 April 2026 00:49:15 +0000 (0:00:00.612) 0:02:17.548 ******** 2026-04-04 00:49:19.756420 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:19.756427 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:19.756434 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:19.756440 | orchestrator | 2026-04-04 00:49:19.756446 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-04 00:49:19.756452 | orchestrator | Saturday 04 April 2026 00:49:16 +0000 (0:00:00.804) 0:02:18.353 ******** 2026-04-04 00:49:19.756458 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:49:19.756465 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:49:19.756471 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:49:19.756477 | orchestrator | 2026-04-04 00:49:19.756482 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-04 00:49:19.756488 | orchestrator | Saturday 04 April 2026 00:49:17 +0000 (0:00:00.697) 0:02:19.051 ******** 2026-04-04 00:49:19.756494 | orchestrator | ok: [testbed-node-0] 2026-04-04 
00:49:19.756501 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:19.756507 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:19.756514 | orchestrator | 2026-04-04 00:49:19.756520 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-04 00:49:19.756527 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:00.734) 0:02:19.786 ******** 2026-04-04 00:49:19.756534 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:49:19.756539 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:49:19.756561 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:49:19.756567 | orchestrator | 2026-04-04 00:49:19.756572 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:49:19.756583 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-04 00:49:19.756590 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-04 00:49:19.756595 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-04-04 00:49:19.756601 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:49:19.756607 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:49:19.756612 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:49:19.756619 | orchestrator | 2026-04-04 00:49:19.756625 | orchestrator | 2026-04-04 00:49:19.756630 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:49:19.756636 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:01.316) 0:02:21.102 ******** 2026-04-04 00:49:19.756643 | orchestrator | 
=============================================================================== 2026-04-04 00:49:19.756649 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.02s 2026-04-04 00:49:19.756655 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.11s 2026-04-04 00:49:19.756661 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.85s 2026-04-04 00:49:19.756667 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.40s 2026-04-04 00:49:19.756673 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.77s 2026-04-04 00:49:19.756680 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.10s 2026-04-04 00:49:19.756686 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.92s 2026-04-04 00:49:19.756699 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.74s 2026-04-04 00:49:19.756706 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.30s 2026-04-04 00:49:19.756717 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.28s 2026-04-04 00:49:19.756724 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.11s 2026-04-04 00:49:19.756731 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.04s 2026-04-04 00:49:19.756737 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.92s 2026-04-04 00:49:19.756743 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s 2026-04-04 00:49:19.756749 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s 2026-04-04 00:49:19.756756 | orchestrator | ovn-controller 
: Copying over config.json files for services ------------ 1.40s 2026-04-04 00:49:19.756762 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.35s 2026-04-04 00:49:19.756768 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.32s 2026-04-04 00:49:19.756774 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.27s 2026-04-04 00:49:19.756781 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.21s 2026-04-04 00:49:19.756788 | orchestrator | 2026-04-04 00:49:19 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:22.797316 | orchestrator | 2026-04-04 00:49:22 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:22.798949 | orchestrator | 2026-04-04 00:49:22 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:22.799004 | orchestrator | 2026-04-04 00:49:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:25.862484 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:25.863058 | orchestrator | 2026-04-04 00:49:25 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:25.863089 | orchestrator | 2026-04-04 00:49:25 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:28.895097 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:28.895969 | orchestrator | 2026-04-04 00:49:28 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:28.896015 | orchestrator | 2026-04-04 00:49:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:31.944693 | orchestrator | 2026-04-04 00:49:31 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:31.946981 | 
orchestrator | 2026-04-04 00:49:31 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:31.947061 | orchestrator | 2026-04-04 00:49:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:34.988094 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:34.988954 | orchestrator | 2026-04-04 00:49:34 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:34.988986 | orchestrator | 2026-04-04 00:49:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:38.027512 | orchestrator | 2026-04-04 00:49:38 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:38.032665 | orchestrator | 2026-04-04 00:49:38 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:38.032733 | orchestrator | 2026-04-04 00:49:38 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:41.072231 | orchestrator | 2026-04-04 00:49:41 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:41.074689 | orchestrator | 2026-04-04 00:49:41 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:41.074773 | orchestrator | 2026-04-04 00:49:41 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:44.143831 | orchestrator | 2026-04-04 00:49:44 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:44.143899 | orchestrator | 2026-04-04 00:49:44 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:44.143905 | orchestrator | 2026-04-04 00:49:44 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:47.151922 | orchestrator | 2026-04-04 00:49:47 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:47.152235 | orchestrator | 2026-04-04 00:49:47 | INFO  | Task 
9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:47.152594 | orchestrator | 2026-04-04 00:49:47 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:50.190084 | orchestrator | 2026-04-04 00:49:50 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:50.190708 | orchestrator | 2026-04-04 00:49:50 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:50.190739 | orchestrator | 2026-04-04 00:49:50 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:53.236804 | orchestrator | 2026-04-04 00:49:53 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:53.236913 | orchestrator | 2026-04-04 00:49:53 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:53.236927 | orchestrator | 2026-04-04 00:49:53 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:56.271775 | orchestrator | 2026-04-04 00:49:56 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:56.272084 | orchestrator | 2026-04-04 00:49:56 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:56.273172 | orchestrator | 2026-04-04 00:49:56 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:49:59.318640 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:49:59.320370 | orchestrator | 2026-04-04 00:49:59 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 00:49:59.320404 | orchestrator | 2026-04-04 00:49:59 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:50:02.347521 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:50:02.347962 | orchestrator | 2026-04-04 00:50:02 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED 2026-04-04 
00:50:02.347997 | orchestrator | 2026-04-04 00:50:02 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:50:05.374488 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:50:05.374595 | orchestrator | 2026-04-04 00:50:05 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:50:05.374602 | orchestrator | 2026-04-04 00:50:05 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:51:54.892749 | orchestrator | 2026-04-04 00:51:54 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:51:54.892899 | orchestrator | 2026-04-04 00:51:54 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state STARTED
2026-04-04 00:51:54.892978 | orchestrator | 2026-04-04 00:51:54 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:51:57.929445 | orchestrator | 2026-04-04 00:51:57 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED
2026-04-04 00:51:57.937859 | orchestrator | 2026-04-04 00:51:57 | INFO  | Task 9ea14550-5acd-457e-8e9d-21de3f3077ec is in state SUCCESS
2026-04-04 00:51:57.938270 | orchestrator |
2026-04-04 00:51:57.939691 | orchestrator |
2026-04-04 00:51:57.939743 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:51:57.939756 | orchestrator |
2026-04-04 00:51:57.939767 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:51:57.939778 | orchestrator | Saturday 04 April 2026 00:45:58 +0000 (0:00:00.325) 0:00:00.325 ********
2026-04-04 00:51:57.939788 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:51:57.939816 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:51:57.939828 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:51:57.939838 | orchestrator |
2026-04-04 00:51:57.939848 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:51:57.939859 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.360) 0:00:00.686 ********
2026-04-04 00:51:57.939870 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-04 00:51:57.939883 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-04 00:51:57.939894 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-04 00:51:57.939903 | orchestrator |
2026-04-04 00:51:57.939913 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
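The run above repeatedly checks two manager task IDs every few seconds until both leave STARTED; the deploy continues once the tracked task reaches SUCCESS. A minimal sketch of such a wait loop, assuming a hypothetical `get_task_state` callable that stands in for whatever task-state API the deployment wrapper actually queries:

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600):
    """Poll task states until no task is left in STARTED.

    `get_task_state` is a hypothetical callable mapping a task id to a
    state string such as "STARTED", "SUCCESS", or "FAILURE"; it is not
    part of the tooling shown in the log, just a stand-in for it.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_task_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Keep polling only tasks that are still running.
        pending = {t for t in pending if states[t] == "STARTED"}
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return states
```

Note the log prints "Wait 1 second(s)" while the observed cadence is roughly three seconds per iteration, so the real loop likely spends extra time in the state lookup itself.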
2026-04-04 00:51:57.939925 | orchestrator |
2026-04-04 00:51:57.939935 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-04 00:51:57.939969 | orchestrator | Saturday 04 April 2026 00:45:59 +0000 (0:00:00.358) 0:00:01.045 ********
2026-04-04 00:51:57.939982 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.939991 | orchestrator |
2026-04-04 00:51:57.940081 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-04 00:51:57.940094 | orchestrator | Saturday 04 April 2026 00:46:00 +0000 (0:00:00.893) 0:00:01.938 ********
2026-04-04 00:51:57.940105 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:51:57.940116 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:51:57.940127 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:51:57.940138 | orchestrator |
2026-04-04 00:51:57.940148 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-04 00:51:57.940159 | orchestrator | Saturday 04 April 2026 00:46:02 +0000 (0:00:02.260) 0:00:04.199 ********
2026-04-04 00:51:57.940170 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.940180 | orchestrator |
2026-04-04 00:51:57.940191 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-04 00:51:57.940202 | orchestrator | Saturday 04 April 2026 00:46:03 +0000 (0:00:01.584) 0:00:04.835 ********
2026-04-04 00:51:57.940314 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:51:57.940327 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:51:57.940337 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:51:57.940349 | orchestrator |
2026-04-04 00:51:57.940359 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-04 00:51:57.940369 | orchestrator | Saturday 04 April 2026 00:46:04 +0000 (0:00:01.584) 0:00:06.419 ********
2026-04-04 00:51:57.940379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940401 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940413 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940423 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940435 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-04 00:51:57.940447 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-04 00:51:57.940458 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-04 00:51:57.940470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-04 00:51:57.940481 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-04 00:51:57.940493 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-04 00:51:57.940505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-04 00:51:57.940516 | orchestrator |
2026-04-04 00:51:57.940630 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-04 00:51:57.940637 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:03.333) 0:00:09.753 ********
2026-04-04 00:51:57.940643 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-04 00:51:57.940650 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-04 00:51:57.940656 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-04 00:51:57.940663 | orchestrator |
2026-04-04 00:51:57.940669 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-04 00:51:57.940675 | orchestrator | Saturday 04 April 2026 00:46:08 +0000 (0:00:00.720) 0:00:10.474 ********
2026-04-04 00:51:57.940681 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-04 00:51:57.940714 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-04 00:51:57.940722 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-04 00:51:57.940728 | orchestrator |
2026-04-04 00:51:57.940735 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-04 00:51:57.940741 | orchestrator | Saturday 04 April 2026 00:46:10 +0000 (0:00:01.632) 0:00:12.107 ********
2026-04-04 00:51:57.940773 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-04 00:51:57.940781 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.940802 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-04 00:51:57.940809 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.940815 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-04 00:51:57.940821 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.940827 | orchestrator |
2026-04-04 00:51:57.940833 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-04 00:51:57.940847 | orchestrator | Saturday 04 April 2026 00:46:11 +0000 (0:00:00.648) 0:00:12.756 ********
2026-04-04 00:51:57.940858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name':
'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.940927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.940936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.940943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.940950 | orchestrator | 2026-04-04 00:51:57.940957 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-04 00:51:57.940965 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:01.930) 0:00:14.687 ******** 2026-04-04 00:51:57.940973 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.940980 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.940987 | orchestrator | changed: 
[testbed-node-2] 2026-04-04 00:51:57.940994 | orchestrator | 2026-04-04 00:51:57.941001 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-04 00:51:57.941008 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:01.116) 0:00:15.803 ******** 2026-04-04 00:51:57.941042 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-04-04 00:51:57.941050 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-04-04 00:51:57.941058 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-04-04 00:51:57.941066 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-04-04 00:51:57.941107 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-04-04 00:51:57.941115 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-04-04 00:51:57.941122 | orchestrator | 2026-04-04 00:51:57.941129 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-04-04 00:51:57.941136 | orchestrator | Saturday 04 April 2026 00:46:16 +0000 (0:00:02.091) 0:00:17.895 ******** 2026-04-04 00:51:57.941149 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.941157 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.941164 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.941171 | orchestrator | 2026-04-04 00:51:57.941178 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-04 00:51:57.941185 | orchestrator | Saturday 04 April 2026 00:46:17 +0000 (0:00:01.183) 0:00:19.078 ******** 2026-04-04 00:51:57.941191 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.941198 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.941204 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.941210 | orchestrator | 2026-04-04 00:51:57.941216 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-04 00:51:57.941222 | 
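In the "Setting sysctl values" task above, items with the sentinel value `KOLLA_UNSET` come back `ok` (left untouched) while every other item is applied and reported `changed`. A rough sketch of that split, using the same item shape as the log; this only mirrors the ok/changed pattern visible here, and the real role's handling of the sentinel may differ:

```python
# Sentinel value seen in the log items; such entries are not applied.
KOLLA_UNSET = "KOLLA_UNSET"


def plan_sysctl(items):
    """Partition sysctl items into (to_apply, to_skip).

    Items valued KOLLA_UNSET are skipped (reported 'ok' in the log);
    the rest would be written, e.g. via `sysctl -w name=value` on the
    node, and reported 'changed'.
    """
    to_apply = [i for i in items if i["value"] != KOLLA_UNSET]
    to_skip = [i for i in items if i["value"] == KOLLA_UNSET]
    return to_apply, to_skip
```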
orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:01.147) 0:00:20.226 ******** 2026-04-04 00:51:57.941229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.941436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.941452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941460 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941467 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.941474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.941486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.941492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941506 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.941523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.941530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.941536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941553 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.941560 | orchestrator | 2026-04-04 00:51:57.941566 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-04 00:51:57.941572 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:00.843) 0:00:21.070 ******** 2026-04-04 00:51:57.941578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.941710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4', '__omit_place_holder__95835cd190c45e58909e0970b81b0591a6f87de4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-04 00:51:57.941722 | orchestrator | 2026-04-04 00:51:57.941728 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-04 00:51:57.941734 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:02.990) 0:00:24.060 ******** 2026-04-04 00:51:57.941739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941751 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941778 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.941787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.941793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.941799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.941804 | orchestrator | 2026-04-04 00:51:57.941810 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-04 00:51:57.941815 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:03.768) 0:00:27.829 ******** 2026-04-04 00:51:57.941821 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:51:57.941826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:51:57.941832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-04 00:51:57.941837 | orchestrator | 2026-04-04 00:51:57.941842 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-04 00:51:57.941848 | orchestrator | Saturday 04 April 2026 00:46:29 +0000 (0:00:03.446) 0:00:31.276 ******** 2026-04-04 00:51:57.941853 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:51:57.941859 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:51:57.941864 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-04 00:51:57.941870 | orchestrator | 2026-04-04 00:51:57.946293 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-04 00:51:57.946389 | orchestrator | Saturday 04 April 2026 00:46:33 +0000 (0:00:04.208) 0:00:35.485 ******** 
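The `healthcheck` dictionaries echoed in the task items above (`interval`, `retries`, `start_period`, `test`, `timeout`) follow Docker's container healthcheck model: the `['CMD-SHELL', '...']` pair is a shell-evaluated probe, and the numeric strings are second counts. A minimal sketch of how such a dict could be rendered into `docker run` flags — the helper name `healthcheck_to_docker_args` is hypothetical, not part of kolla-ansible:

```python
def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Render a kolla-style healthcheck dict into docker run flags.

    The log shows plain second counts as strings ('30'), so an 's'
    unit suffix is appended for Docker's duration syntax.
    """
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # 'test' is ['CMD-SHELL', '<command>']; --health-cmd takes the shell command.
    kind, cmd = hc["test"]
    if kind == "CMD-SHELL":
        args += ["--health-cmd", cmd]
    return args

# Example dict as it appears in the task output above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
      "timeout": "30"}
print(healthcheck_to_docker_args(hc))
```

Containers without a `healthcheck` key (keepalived above) simply get no health probe, which matches the empty `dimensions`/no-healthcheck items in the log.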
2026-04-04 00:51:57.946406 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.946418 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.946428 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.946439 | orchestrator | 2026-04-04 00:51:57.946474 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-04 00:51:57.946510 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:01.366) 0:00:36.851 ******** 2026-04-04 00:51:57.946522 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:51:57.946531 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:51:57.946538 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-04 00:51:57.946544 | orchestrator | 2026-04-04 00:51:57.946551 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-04 00:51:57.946558 | orchestrator | Saturday 04 April 2026 00:46:37 +0000 (0:00:02.212) 0:00:39.064 ******** 2026-04-04 00:51:57.946565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:51:57.946572 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:51:57.946578 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-04 00:51:57.946584 | orchestrator | 2026-04-04 00:51:57.946590 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-04 00:51:57.946597 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:02.281) 0:00:41.345 
******** 2026-04-04 00:51:57.946604 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-04 00:51:57.946610 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-04 00:51:57.946627 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-04 00:51:57.946634 | orchestrator | 2026-04-04 00:51:57.946647 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-04 00:51:57.946653 | orchestrator | Saturday 04 April 2026 00:46:41 +0000 (0:00:02.157) 0:00:43.503 ******** 2026-04-04 00:51:57.946660 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-04 00:51:57.946666 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-04 00:51:57.946672 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-04 00:51:57.946678 | orchestrator | 2026-04-04 00:51:57.946684 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-04 00:51:57.946690 | orchestrator | Saturday 04 April 2026 00:46:44 +0000 (0:00:02.422) 0:00:45.926 ******** 2026-04-04 00:51:57.946697 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.946703 | orchestrator | 2026-04-04 00:51:57.946709 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-04 00:51:57.946715 | orchestrator | Saturday 04 April 2026 00:46:45 +0000 (0:00:00.670) 0:00:46.596 ******** 2026-04-04 00:51:57.946724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.946790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.946824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.946832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.946844 | orchestrator | 2026-04-04 00:51:57.946850 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-04 00:51:57.946856 | orchestrator | Saturday 04 April 2026 00:46:48 +0000 (0:00:03.702) 0:00:50.299 ******** 2026-04-04 00:51:57.946876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.946884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.946890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.946897 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.946904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.946910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.946917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.946928 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.946934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.946950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.946957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.946964 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.946970 | orchestrator | 2026-04-04 00:51:57.946977 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-04 00:51:57.946983 | orchestrator | Saturday 04 April 2026 00:46:49 +0000 (0:00:01.115) 0:00:51.415 ******** 2026-04-04 00:51:57.946990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.946997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947014 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947072 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947122 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947133 | orchestrator | 2026-04-04 00:51:57.947144 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-04 00:51:57.947154 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:01.249) 0:00:52.664 ******** 2026-04-04 00:51:57.947165 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947212 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947222 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947274 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947320 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947326 | orchestrator | 2026-04-04 00:51:57.947332 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-04-04 00:51:57.947339 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.924) 0:00:53.588 ******** 2026-04-04 00:51:57.947349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-04 00:51:57.947377 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947404 | 
orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947444 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 00:51:57.947450 | orchestrator | 2026-04-04 00:51:57.947456 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-04 00:51:57.947463 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.901) 0:00:54.490 ******** 2026-04-04 00:51:57.947469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947489 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947527 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947553 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947559 | orchestrator | 2026-04-04 00:51:57.947566 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-04 00:51:57.947572 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:01.234) 0:00:55.724 ******** 2026-04-04 00:51:57.947578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947615 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947658 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947711 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947720 | orchestrator | 2026-04-04 00:51:57.947729 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-04 00:51:57.947748 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:00.627) 0:00:56.352 ******** 2026-04-04 00:51:57.947760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947781 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947793 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947821 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947832 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.947839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947863 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947870 | orchestrator | 2026-04-04 00:51:57.947876 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-04 00:51:57.947882 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.721) 0:00:57.073 ******** 2026-04-04 00:51:57.947889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947908 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.947926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947950 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.947956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-04 00:51:57.947977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-04 00:51:57.947991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-04 00:51:57.947998 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.948004 | orchestrator | 2026-04-04 00:51:57.948010 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-04 00:51:57.948017 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:01.314) 0:00:58.388 ******** 2026-04-04 00:51:57.948023 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-04 00:51:57.948034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-04 00:51:57.948058 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-04 00:51:57.948065 | orchestrator | 2026-04-04 00:51:57.948071 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-04 00:51:57.948077 | orchestrator | Saturday 04 April 2026 00:46:58 +0000 (0:00:01.626) 0:01:00.014 ******** 2026-04-04 00:51:57.948088 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-04 00:51:57.948094 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-04 00:51:57.948101 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-04 00:51:57.948107 | orchestrator | 2026-04-04 00:51:57.948113 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-04 00:51:57.948119 | orchestrator | Saturday 04 April 2026 00:47:00 +0000 (0:00:02.102) 0:01:02.117 ******** 2026-04-04 00:51:57.948125 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 00:51:57.948131 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 00:51:57.948138 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 00:51:57.948144 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.948150 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 00:51:57.948156 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 00:51:57.948162 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.948169 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 00:51:57.948175 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.948181 | orchestrator | 2026-04-04 00:51:57.948187 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-04 00:51:57.948193 | orchestrator | Saturday 04 April 2026 00:47:01 +0000 (0:00:01.240) 0:01:03.357 ******** 2026-04-04 00:51:57.948199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-04 00:51:57.948312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.948318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.948325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-04 00:51:57.948331 | orchestrator | 2026-04-04 00:51:57.948337 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-04 00:51:57.948349 | orchestrator | Saturday 04 April 2026 00:47:05 +0000 (0:00:03.585) 0:01:06.942 ******** 2026-04-04 00:51:57.948355 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.948361 | orchestrator | 2026-04-04 00:51:57.948368 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-04 00:51:57.948374 | orchestrator | Saturday 04 
April 2026 00:47:05 +0000 (0:00:00.531) 0:01:07.474 ******** 2026-04-04 00:51:57.948381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-04 00:51:57.948398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-04 00:51:57.948405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.948411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.948417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-04 00:51:57.948466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.948471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948486 | orchestrator | 2026-04-04 00:51:57.948493 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-04 00:51:57.948498 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:03.615) 0:01:11.089 ******** 2026-04-04 00:51:57.948504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-04 00:51:57.948518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-04-04 00:51:57.948524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948535 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.948541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-04 00:51:57.948551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.948556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-04-04 00:51:57.948568 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.948580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-04 00:51:57.948586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.948591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948607 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.948613 | orchestrator | 2026-04-04 00:51:57.948622 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-04 00:51:57.948631 | orchestrator | Saturday 04 April 2026 00:47:10 +0000 (0:00:00.688) 0:01:11.777 ******** 2026-04-04 00:51:57.948641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948661 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.948671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948689 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.948699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-04 00:51:57.948716 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.948722 | orchestrator | 2026-04-04 00:51:57.948732 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-04 00:51:57.948737 | orchestrator | Saturday 04 April 2026 00:47:11 +0000 (0:00:00.898) 0:01:12.675 ******** 2026-04-04 00:51:57.948743 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.948748 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.948753 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.948759 | orchestrator | 2026-04-04 00:51:57.948769 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-04 00:51:57.948774 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:01.269) 0:01:13.944 ******** 2026-04-04 00:51:57.948780 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.948785 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.948790 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.948796 | orchestrator | 2026-04-04 00:51:57.948801 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-04 00:51:57.948807 | orchestrator | Saturday 04 April 2026 
00:47:14 +0000 (0:00:02.254) 0:01:16.198 ******** 2026-04-04 00:51:57.948812 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.948818 | orchestrator | 2026-04-04 00:51:57.948823 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-04 00:51:57.948828 | orchestrator | Saturday 04 April 2026 00:47:15 +0000 (0:00:00.611) 0:01:16.810 ******** 2026-04-04 00:51:57.948840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.948846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.948859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.948909 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948920 | orchestrator | 2026-04-04 00:51:57.948925 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-04 00:51:57.948931 | orchestrator | Saturday 04 April 2026 00:47:18 +0000 (0:00:03.464) 0:01:20.274 ******** 2026-04-04 00:51:57.948947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.948953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948968 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.948974 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.948980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.948991 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.949014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.949020 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.949026 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949031 | orchestrator | 2026-04-04 00:51:57.949037 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-04 00:51:57.949043 | orchestrator | Saturday 04 April 2026 00:47:19 +0000 (0:00:01.167) 0:01:21.442 ******** 2026-04-04 00:51:57.949048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-04 00:51:57.949055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-04 00:51:57.949061 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.949066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-04 00:51:57.949072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-04-04 00:51:57.949077 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-04 00:51:57.949088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-04 00:51:57.949094 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949099 | orchestrator | 2026-04-04 00:51:57.949104 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-04 00:51:57.949110 | orchestrator | Saturday 04 April 2026 00:47:20 +0000 (0:00:00.891) 0:01:22.334 ******** 2026-04-04 00:51:57.949115 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.949121 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.949126 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.949136 | orchestrator | 2026-04-04 00:51:57.949141 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-04 00:51:57.949147 | orchestrator | Saturday 04 April 2026 00:47:22 +0000 (0:00:01.295) 0:01:23.630 ******** 2026-04-04 00:51:57.949153 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.949158 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.949164 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.949169 | orchestrator | 2026-04-04 00:51:57.949178 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-04 00:51:57.949184 | orchestrator | Saturday 04 April 2026 00:47:23 +0000 (0:00:01.632) 0:01:25.262 ******** 2026-04-04 00:51:57.949189 | orchestrator | 
skipping: [testbed-node-0] 2026-04-04 00:51:57.949194 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949200 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949205 | orchestrator | 2026-04-04 00:51:57.949214 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-04 00:51:57.949220 | orchestrator | Saturday 04 April 2026 00:47:24 +0000 (0:00:00.250) 0:01:25.513 ******** 2026-04-04 00:51:57.949225 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.949231 | orchestrator | 2026-04-04 00:51:57.949236 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-04 00:51:57.949242 | orchestrator | Saturday 04 April 2026 00:47:24 +0000 (0:00:00.777) 0:01:26.290 ******** 2026-04-04 00:51:57.949248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:51:57.949318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:51:57.949325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-04 00:51:57.949335 | orchestrator | 2026-04-04 00:51:57.949341 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-04 00:51:57.949346 | orchestrator | Saturday 04 April 2026 00:47:27 +0000 (0:00:02.704) 0:01:28.995 ******** 2026-04-04 00:51:57.949356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 00:51:57.949362 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.949371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 00:51:57.949377 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-04 00:51:57.949388 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949394 | orchestrator | 2026-04-04 00:51:57.949399 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-04 00:51:57.949405 | orchestrator | Saturday 04 April 2026 00:47:28 +0000 (0:00:01.350) 0:01:30.345 ******** 2026-04-04 00:51:57.949412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949433 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.949439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949451 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-04 00:51:57.949476 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949481 | orchestrator | 2026-04-04 00:51:57.949487 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-04-04 00:51:57.949492 | orchestrator | Saturday 04 April 2026 00:47:31 +0000 (0:00:02.254) 0:01:32.600 ******** 2026-04-04 00:51:57.949498 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.949503 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949508 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949514 | orchestrator | 2026-04-04 00:51:57.949519 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-04 00:51:57.949524 | orchestrator | Saturday 04 April 2026 00:47:31 +0000 (0:00:00.386) 0:01:32.986 ******** 2026-04-04 00:51:57.949530 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.949535 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.949540 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.949546 | orchestrator | 2026-04-04 00:51:57.949551 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-04 00:51:57.949557 | orchestrator | Saturday 04 April 2026 00:47:32 +0000 (0:00:01.173) 0:01:34.159 ******** 2026-04-04 00:51:57.949562 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.949568 | orchestrator | 2026-04-04 00:51:57.949573 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-04 00:51:57.949579 | orchestrator | Saturday 04 April 2026 00:47:33 +0000 (0:00:00.915) 0:01:35.074 ******** 2026-04-04 00:51:57.949584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.949595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.949639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.949694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949730 | orchestrator |
2026-04-04 00:51:57.949740 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-04-04 00:51:57.949751 | orchestrator | Saturday 04 April 2026 00:47:37 +0000 (0:00:03.807) 0:01:38.882 ********
2026-04-04 00:51:57.949762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.949768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949797 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.949803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.949813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.949984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-04-04 00:51:57.950002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950009 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.950042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950068 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.950073 | orchestrator |
2026-04-04 00:51:57.950079 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-04-04 00:51:57.950085 | orchestrator | Saturday 04 April 2026 00:47:38 +0000 (0:00:00.670) 0:01:39.553 ********
2026-04-04 00:51:57.950091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950105 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.950110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950121 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.950131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-04-04 00:51:57.950146 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.950152 | orchestrator |
2026-04-04 00:51:57.950157 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-04-04 00:51:57.950163 | orchestrator | Saturday 04 April 2026 00:47:39 +0000 (0:00:01.177) 0:01:40.731 ********
2026-04-04 00:51:57.950168 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.950174 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.950184 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.950190 | orchestrator |
2026-04-04 00:51:57.950195 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-04-04 00:51:57.950201 | orchestrator | Saturday 04 April 2026 00:47:40 +0000 (0:00:01.321) 0:01:42.052 ********
2026-04-04 00:51:57.950206 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.950211 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.950217 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.950222 | orchestrator |
2026-04-04 00:51:57.950228 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-04-04 00:51:57.950233 | orchestrator | Saturday 04 April 2026 00:47:42 +0000 (0:00:02.119) 0:01:44.172 ********
2026-04-04 00:51:57.950239 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.950245 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.950276 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.950283 | orchestrator |
2026-04-04 00:51:57.950289 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-04-04 00:51:57.950294 | orchestrator | Saturday 04 April 2026 00:47:43 +0000 (0:00:00.341) 0:01:44.513 ********
2026-04-04 00:51:57.950300 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.950305 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.950310 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.950316 | orchestrator |
2026-04-04 00:51:57.950321 | orchestrator | TASK [include_role : designate] ************************************************
2026-04-04 00:51:57.950326 | orchestrator | Saturday 04 April 2026 00:47:43 +0000 (0:00:00.325) 0:01:44.839 ********
2026-04-04 00:51:57.950332 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.950337 | orchestrator |
2026-04-04 00:51:57.950343 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-04-04 00:51:57.950348 | orchestrator | Saturday 04 April 2026 00:47:44 +0000 (0:00:00.963) 0:01:45.803 ********
2026-04-04 00:51:57.950354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 00:51:57.950361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 00:51:57.950367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 00:51:57.950419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 00:51:57.950435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 00:51:57.950474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 00:51:57.950489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950518 | orchestrator |
2026-04-04 00:51:57.950524 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-04-04 00:51:57.950529 | orchestrator | Saturday 04 April 2026 00:47:48 +0000 (0:00:04.244) 0:01:50.047 ********
2026-04-04 00:51:57.950539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 00:51:57.950567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 00:51:57.950583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 00:51:57.950593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950641 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.950665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 00:51:57.950674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:51:57.950684 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 00:51:57.950695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 00:51:57.950722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950779 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.950785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950790 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.950819 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.950824 | orchestrator | 2026-04-04 00:51:57.950830 | 
orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-04 00:51:57.950835 | orchestrator | Saturday 04 April 2026 00:47:49 +0000 (0:00:00.925) 0:01:50.973 ******** 2026-04-04 00:51:57.950842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950853 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.950859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950870 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.950875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-04 00:51:57.950886 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.950891 | orchestrator | 2026-04-04 00:51:57.950897 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] 
********** 2026-04-04 00:51:57.950902 | orchestrator | Saturday 04 April 2026 00:47:50 +0000 (0:00:01.497) 0:01:52.471 ******** 2026-04-04 00:51:57.950908 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.950913 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.950918 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.950928 | orchestrator | 2026-04-04 00:51:57.950934 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-04 00:51:57.950939 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:01.187) 0:01:53.658 ******** 2026-04-04 00:51:57.950945 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.950950 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.950955 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.950961 | orchestrator | 2026-04-04 00:51:57.950966 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-04 00:51:57.950972 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:01.839) 0:01:55.498 ******** 2026-04-04 00:51:57.950977 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.950983 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.950988 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.950993 | orchestrator | 2026-04-04 00:51:57.950999 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-04 00:51:57.951004 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:00.267) 0:01:55.765 ******** 2026-04-04 00:51:57.951009 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.951015 | orchestrator | 2026-04-04 00:51:57.951020 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-04 00:51:57.951026 | orchestrator | Saturday 04 April 2026 00:47:55 +0000 (0:00:00.854) 
0:01:56.620 ******** 2026-04-04 00:51:57.951042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:51:57.951051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.951066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:51:57.951076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.951088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 00:51:57.951102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-04 00:51:57.951113 | orchestrator |
2026-04-04 00:51:57.951119 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-04-04 00:51:57.951124 | orchestrator | Saturday 04 April 2026 00:47:59 +0000 (0:00:04.108) 0:02:00.728 ********
2026-04-04 00:51:57.951130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-04 00:51:57.951142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-04 00:51:57.951148 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list':
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-04 00:51:57.951191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-04 00:51:57.951198 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-04 00:51:57.951222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-04-04 00:51:57.951233 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951242 | orchestrator |
2026-04-04 00:51:57.951266 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-04-04 00:51:57.951281 | orchestrator | Saturday 04 April 2026 00:48:05 +0000 (0:00:06.374) 0:02:07.102 ********
2026-04-04 00:51:57.951291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951316 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951347 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-04 00:51:57.951372 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951377 | orchestrator |
2026-04-04 00:51:57.951383 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-04 00:51:57.951388 | orchestrator | Saturday 04 April 2026 00:48:11 +0000 (0:00:05.409) 0:02:12.512 ********
2026-04-04 00:51:57.951394 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.951399 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.951405 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.951410 | orchestrator |
2026-04-04 00:51:57.951415 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-04 00:51:57.951421 | orchestrator | Saturday 04 April 2026 00:48:12 +0000 (0:00:01.113) 0:02:13.625 ********
2026-04-04 00:51:57.951426 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.951432 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.951442 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.951447 | orchestrator |
2026-04-04 00:51:57.951453 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-04 00:51:57.951458 | orchestrator | Saturday 04 April 2026 00:48:13 +0000 (0:00:01.655) 0:02:15.281 ********
2026-04-04 00:51:57.951463 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951474 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951483 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951488 | orchestrator |
2026-04-04 00:51:57.951494 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-04 00:51:57.951499 | orchestrator | Saturday 04 April 2026 00:48:14
+0000 (0:00:00.286) 0:02:15.567 ********
2026-04-04 00:51:57.951504 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.951510 | orchestrator |
2026-04-04 00:51:57.951515 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-04 00:51:57.951521 | orchestrator | Saturday 04 April 2026 00:48:15 +0000 (0:00:01.027) 0:02:16.595 ********
2026-04-04 00:51:57.951527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951545 | orchestrator |
2026-04-04 00:51:57.951550 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-04-04 00:51:57.951556 | orchestrator | Saturday 04 April 2026 00:48:18 +0000 (0:00:03.801) 0:02:20.396 ********
2026-04-04 00:51:57.951561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951567 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951775 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-04 00:51:57.951802 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951811 | orchestrator |
2026-04-04 00:51:57.951820 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-04-04 00:51:57.951830 | orchestrator | Saturday 04 April 2026 00:48:19 +0000 (0:00:00.769) 0:02:21.165 ********
2026-04-04 00:51:57.951837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951849 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951860 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951865 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-04-04 00:51:57.951881 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951887 | orchestrator |
2026-04-04 00:51:57.951892 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-04 00:51:57.951898 | orchestrator | Saturday 04 April 2026 00:48:20 +0000 (0:00:00.863) 0:02:22.029 ********
2026-04-04 00:51:57.951903 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.951908 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.951914 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.951919 | orchestrator |
2026-04-04 00:51:57.951924 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-04 00:51:57.951930 | orchestrator | Saturday 04 April 2026 00:48:21 +0000 (0:00:01.246) 0:02:23.276 ********
2026-04-04 00:51:57.951935 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.951941 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.951946 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.951951 | orchestrator |
2026-04-04 00:51:57.951961 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-04 00:51:57.951967 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:01.714) 0:02:24.991 ********
2026-04-04 00:51:57.951972 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.951977 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.951983 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.951989 | orchestrator |
2026-04-04 00:51:57.951994 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-04 00:51:57.951999 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:00.244) 0:02:25.236 ********
2026-04-04 00:51:57.952005 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.952010 | orchestrator |
2026-04-04 00:51:57.952016 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-04-04 00:51:57.952021 | orchestrator | Saturday 04 April 2026 00:48:24 +0000 (0:00:00.910) 0:02:26.146 ********
2026-04-04 00:51:57.952093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA':
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952155 | orchestrator |
2026-04-04 00:51:57.952160 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-04-04 00:51:57.952166 | orchestrator | Saturday 04 April 2026 00:48:27 +0000 (0:00:03.082) 0:02:29.228 ********
2026-04-04 00:51:57.952197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'},
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952220 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.952230 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.952344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:51:57.952358 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.952364 | orchestrator |
2026-04-04 00:51:57.952370 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-04-04 00:51:57.952375 | orchestrator | Saturday 04 April 2026 00:48:28 +0000 (0:00:00.575) 0:02:29.804 ********
2026-04-04 00:51:57.952382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-04-04 00:51:57.952389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-04-04 00:51:57.952397 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-04 00:51:57.952402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:51:57.952409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-04 00:51:57.952424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-04 00:51:57.952430 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.952436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:51:57.952441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-04 00:51:57.952447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:51:57.952452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-04 00:51:57.952456 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.952462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-04 00:51:57.952507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:51:57.952517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-04 00:51:57.952522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-04 00:51:57.952527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-04 00:51:57.952532 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.952537 | orchestrator | 2026-04-04 00:51:57.952542 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-04-04 00:51:57.952547 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:00.867) 0:02:30.672 ******** 2026-04-04 00:51:57.952552 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.952557 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.952562 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.952567 | orchestrator | 2026-04-04 00:51:57.952576 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-04-04 00:51:57.952581 | orchestrator | Saturday 04 April 2026 00:48:30 +0000 (0:00:01.366) 0:02:32.039 ******** 2026-04-04 00:51:57.952586 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.952590 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.952595 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.952600 | orchestrator | 2026-04-04 00:51:57.952605 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-04-04 00:51:57.952611 | orchestrator | Saturday 04 April 2026 00:48:32 +0000 (0:00:01.889) 0:02:33.929 ******** 2026-04-04 00:51:57.952615 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.952620 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.952625 | orchestrator | skipping: [testbed-node-2] 2026-04-04 
00:51:57.952630 | orchestrator | 2026-04-04 00:51:57.952635 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-04-04 00:51:57.952640 | orchestrator | Saturday 04 April 2026 00:48:32 +0000 (0:00:00.267) 0:02:34.196 ******** 2026-04-04 00:51:57.952644 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.952649 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.952654 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.952659 | orchestrator | 2026-04-04 00:51:57.952664 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-04 00:51:57.952668 | orchestrator | Saturday 04 April 2026 00:48:32 +0000 (0:00:00.253) 0:02:34.450 ******** 2026-04-04 00:51:57.952673 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.952678 | orchestrator | 2026-04-04 00:51:57.952683 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-04 00:51:57.952688 | orchestrator | Saturday 04 April 2026 00:48:33 +0000 (0:00:00.946) 0:02:35.396 ******** 2026-04-04 00:51:57.952693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:51:57.952736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:51:57.952763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:51:57.952769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952833 | orchestrator | 2026-04-04 00:51:57.952838 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-04 00:51:57.952843 | orchestrator | Saturday 04 April 2026 00:48:37 +0000 (0:00:03.119) 0:02:38.516 ******** 2026-04-04 00:51:57.952848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:51:57.952854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952864 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.952907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-04-04 00:51:57.952921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952931 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.952940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:51:57.952952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:51:57.952964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:51:57.952972 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.952979 | orchestrator | 2026-04-04 00:51:57.953046 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-04-04 00:51:57.953058 | orchestrator | Saturday 04 
April 2026 00:48:37 +0000 (0:00:00.575) 0:02:39.092 ******** 2026-04-04 00:51:57.953072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-04 00:51:57.953081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-04 00:51:57.953089 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.953097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-04 00:51:57.953105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-04 00:51:57.953113 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.953120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-04-04 00:51:57.953128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}})  2026-04-04 00:51:57.953136 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.953144 | orchestrator | 2026-04-04 00:51:57.953152 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-04 00:51:57.953161 | orchestrator | Saturday 04 April 2026 00:48:38 +0000 (0:00:00.870) 0:02:39.962 ******** 2026-04-04 00:51:57.953169 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.953176 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.953181 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.953186 | orchestrator | 2026-04-04 00:51:57.953191 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-04 00:51:57.953196 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:01.107) 0:02:41.070 ******** 2026-04-04 00:51:57.953201 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.953206 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.953211 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.953216 | orchestrator | 2026-04-04 00:51:57.953221 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-04 00:51:57.953226 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:01.736) 0:02:42.806 ******** 2026-04-04 00:51:57.953231 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.953236 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.953240 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.953245 | orchestrator | 2026-04-04 00:51:57.953273 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-04-04 00:51:57.953279 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.269) 0:02:43.076 ******** 2026-04-04 00:51:57.953284 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 
00:51:57.953289 | orchestrator | 2026-04-04 00:51:57.953293 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-04 00:51:57.953298 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:01.010) 0:02:44.087 ******** 2026-04-04 00:51:57.953310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 00:51:57.953369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 
00:51:57.953378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 00:51:57.953383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 00:51:57.953398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953403 | orchestrator | 2026-04-04 00:51:57.953408 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-04 00:51:57.953413 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:03.591) 0:02:47.678 ******** 2026-04-04 00:51:57.953446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 00:51:57.953453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953458 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.953463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 00:51:57.953468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953484 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.953514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 00:51:57.953523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953529 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.953534 | orchestrator | 2026-04-04 00:51:57.953539 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-04 00:51:57.953544 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.757) 0:02:48.435 ******** 2026-04-04 00:51:57.953550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953560 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.953565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953575 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.953580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-04-04 00:51:57.953594 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.953599 | orchestrator | 2026-04-04 00:51:57.953603 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-04 00:51:57.953608 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.952) 0:02:49.387 ******** 2026-04-04 00:51:57.953613 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.953618 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.953623 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.953627 | orchestrator | 2026-04-04 00:51:57.953632 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-04 00:51:57.953637 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:01.290) 0:02:50.678 ******** 2026-04-04 00:51:57.953642 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.953646 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.953651 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.953656 | orchestrator | 2026-04-04 00:51:57.953661 | 
orchestrator | TASK [include_role : manila] *************************************************** 2026-04-04 00:51:57.953666 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:02.120) 0:02:52.799 ******** 2026-04-04 00:51:57.953670 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.953675 | orchestrator | 2026-04-04 00:51:57.953680 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-04 00:51:57.953685 | orchestrator | Saturday 04 April 2026 00:48:52 +0000 (0:00:00.920) 0:02:53.719 ******** 2026-04-04 00:51:57.953723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-04 00:51:57.953734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-04 00:51:57.953754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-04 00:51:57.953823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953839 | orchestrator | 2026-04-04 00:51:57.953844 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-04 00:51:57.953848 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:03.788) 0:02:57.508 ******** 2026-04-04 00:51:57.953886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-04 00:51:57.953896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953915 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.953920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-04 00:51:57.953925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-04 00:51:57.953977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953986 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.953991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.953997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.954002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.954007 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.954033 | orchestrator | 2026-04-04 00:51:57.954040 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-04 00:51:57.954045 | orchestrator | Saturday 04 April 2026 00:48:56 +0000 (0:00:00.663) 0:02:58.172 ******** 2026-04-04 00:51:57.954050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954063 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.954123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954148 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.954155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-04 00:51:57.954179 | 
orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.954187 | orchestrator | 2026-04-04 00:51:57.954194 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-04 00:51:57.954202 | orchestrator | Saturday 04 April 2026 00:48:57 +0000 (0:00:00.805) 0:02:58.977 ******** 2026-04-04 00:51:57.954210 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.954218 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.954226 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.954234 | orchestrator | 2026-04-04 00:51:57.954240 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-04 00:51:57.954245 | orchestrator | Saturday 04 April 2026 00:48:58 +0000 (0:00:01.208) 0:03:00.185 ******** 2026-04-04 00:51:57.954294 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.954303 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.954311 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.954318 | orchestrator | 2026-04-04 00:51:57.954325 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-04 00:51:57.954333 | orchestrator | Saturday 04 April 2026 00:49:00 +0000 (0:00:01.742) 0:03:01.928 ******** 2026-04-04 00:51:57.954340 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.954347 | orchestrator | 2026-04-04 00:51:57.954354 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-04 00:51:57.954361 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:01.138) 0:03:03.067 ******** 2026-04-04 00:51:57.954369 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 00:51:57.954377 | orchestrator | 2026-04-04 00:51:57.954384 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-04 
00:51:57.954391 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:02.768) 0:03:05.835 ******** 2026-04-04 00:51:57.954400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954479 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954500 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.954514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954531 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.954604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954634 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.954639 | orchestrator | 2026-04-04 00:51:57.954645 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 
2026-04-04 00:51:57.954649 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:02.022) 0:03:07.858 ******** 2026-04-04 00:51:57.954655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954660 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954669 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.954729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954742 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.954748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:51:57.954796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-04 00:51:57.954803 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.954807 | orchestrator | 2026-04-04 00:51:57.954812 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-04 
00:51:57.954817 | orchestrator | Saturday 04 April 2026 00:49:08 +0000 (0:00:02.518) 0:03:10.376 ******** 2026-04-04 00:51:57.954822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:51:57.954827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:51:57.954832 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.954837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:51:57.954842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:51:57.954852 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.954856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-04 00:51:57.954895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}) 
2026-04-04 00:51:57.954902 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.954906 | orchestrator | 
2026-04-04 00:51:57.954915 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-04-04 00:51:57.954920 | orchestrator | Saturday 04 April 2026 00:49:11 +0000 (0:00:02.291) 0:03:12.667 ********
2026-04-04 00:51:57.954929 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.954936 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.954946 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.954972 | orchestrator | 
2026-04-04 00:51:57.954980 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-04-04 00:51:57.954997 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:02.006) 0:03:14.673 ********
2026-04-04 00:51:57.955004 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955012 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955020 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955028 | orchestrator | 
2026-04-04 00:51:57.955036 | orchestrator | TASK [include_role : masakari] *************************************************
2026-04-04 00:51:57.955044 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:01.664) 0:03:16.338 ********
2026-04-04 00:51:57.955052 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955060 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955068 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955077 | orchestrator | 
2026-04-04 00:51:57.955085 | orchestrator | TASK [include_role : memcached] ************************************************
2026-04-04 00:51:57.955094 | orchestrator | Saturday 04 April 2026 00:49:15 +0000 (0:00:00.290) 0:03:16.628 ********
2026-04-04 00:51:57.955101 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04
00:51:57.955106 | orchestrator | 2026-04-04 00:51:57.955110 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-04 00:51:57.955115 | orchestrator | Saturday 04 April 2026 00:49:16 +0000 (0:00:01.331) 0:03:17.960 ******** 2026-04-04 00:51:57.955121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:51:57.955136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:51:57.955144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-04 00:51:57.955152 | orchestrator | 2026-04-04 00:51:57.955162 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-04 00:51:57.955173 | orchestrator | Saturday 04 April 2026 00:49:17 +0000 (0:00:01.473) 0:03:19.433 ******** 2026-04-04 00:51:57.955276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:51:57.955292 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.955297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 
'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:51:57.955302 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.955307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-04 00:51:57.955319 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.955323 | orchestrator | 2026-04-04 00:51:57.955328 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-04 00:51:57.955333 | orchestrator | Saturday 04 April 2026 00:49:18 +0000 (0:00:00.654) 0:03:20.088 ******** 2026-04-04 00:51:57.955338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}) 
2026-04-04 00:51:57.955343 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}) 
2026-04-04 00:51:57.955352 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}) 
2026-04-04 00:51:57.955362 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955366 | orchestrator | 
2026-04-04 00:51:57.955371 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-04-04 00:51:57.955375 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.618) 0:03:20.707 ********
2026-04-04 00:51:57.955380 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955384 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955389 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955393 | orchestrator | 
2026-04-04 00:51:57.955398 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-04-04 00:51:57.955402 | orchestrator | Saturday 04 April 2026 00:49:19 +0000 (0:00:00.410) 0:03:21.117 ********
2026-04-04 00:51:57.955407 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955411 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955416 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955421 | orchestrator | 
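[editor's note] The mariadb HAProxy parameters logged above (mode tcp, port 3306, `option clitcpka`/`option srvtcpka`, 3600s timeouts, and a `custom_member_list` with one active server and two `backup` servers) would render into an HAProxy listen section roughly like the sketch below. This is a hedged reconstruction from the logged values only; the actual section name, bind address (the internal VIP is assumed here), and global defaults come from kolla-ansible's haproxy templates and may differ:

```
listen mariadb
    mode tcp
    option clitcpka
    timeout client 3600s
    option srvtcpka
    timeout server 3600s
    bind <internal_vip_address>:3306   # VIP placeholder; not shown in this log
    server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
    server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

The `backup` keyword keeps all traffic on testbed-node-0 while it passes health checks, so the Galera cluster receives writes through a single node; testbed-node-1/2 only take over if the active backend fails its checks.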
2026-04-04 00:51:57.955425 | orchestrator | TASK [include_role : mistral] **************************************************
2026-04-04 00:51:57.955430 | orchestrator | Saturday 04 April 2026 00:49:20 +0000 (0:00:01.164) 0:03:22.281 ********
2026-04-04 00:51:57.955434 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.955439 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.955443 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.955448 | orchestrator | 
2026-04-04 00:51:57.955497 | orchestrator | TASK [include_role : neutron] **************************************************
2026-04-04 00:51:57.955504 | orchestrator | Saturday 04 April 2026 00:49:21 +0000 (0:00:00.260) 0:03:22.542 ********
2026-04-04 00:51:57.955508 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.955513 | orchestrator | 
2026-04-04 00:51:57.955521 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-04-04 00:51:57.955526 | orchestrator | Saturday 04 April 2026 00:49:22 +0000 (0:00:01.270) 0:03:23.813 ********
2026-04-04 00:51:57.955531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 00:51:57.955542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 00:51:57.955547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 00:51:57.955627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 
00:51:57.955727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 00:51:57.955742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.955825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 00:51:57.955889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.955899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.955960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.955971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.955976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.956095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956099 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.956153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956159 | orchestrator | 2026-04-04 00:51:57.956167 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-04 00:51:57.956172 | orchestrator | Saturday 04 April 2026 00:49:26 +0000 (0:00:03.845) 0:03:27.658 ******** 2026-04-04 00:51:57.956177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 00:51:57.956182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 00:51:57.956242 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 00:51:57.956247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 00:51:57.956385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 
00:51:57.956445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.956582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956594 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:51:57.956599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 00:51:57.956663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  
2026-04-04 00:51:57.956690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956766 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.956771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-04 00:51:57.956775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-04 00:51:57.956864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.956871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-04 00:51:57.956899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-04 00:51:57.956908 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.956915 | orchestrator | 2026-04-04 00:51:57.956926 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-04 00:51:57.956934 | orchestrator | Saturday 04 April 2026 00:49:28 +0000 (0:00:01.973) 0:03:29.632 ******** 2026-04-04 00:51:57.956941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-04 00:51:57.956945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-04 00:51:57.956950 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.956954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-04 00:51:57.956958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-04 00:51:57.956962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  
2026-04-04 00:51:57.956967 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.956971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-04 00:51:57.956982 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.956986 | orchestrator | 2026-04-04 00:51:57.956990 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-04 00:51:57.956995 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:01.456) 0:03:31.089 ******** 2026-04-04 00:51:57.956999 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957005 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957012 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957018 | orchestrator | 2026-04-04 00:51:57.957029 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-04 00:51:57.957038 | orchestrator | Saturday 04 April 2026 00:49:30 +0000 (0:00:01.292) 0:03:32.381 ******** 2026-04-04 00:51:57.957043 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957049 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957056 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957062 | orchestrator | 2026-04-04 00:51:57.957068 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-04 00:51:57.957075 | orchestrator | Saturday 04 April 2026 00:49:32 +0000 (0:00:02.081) 0:03:34.462 ******** 2026-04-04 00:51:57.957081 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.957086 | orchestrator | 2026-04-04 00:51:57.957092 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-04 00:51:57.957099 | 
orchestrator | Saturday 04 April 2026 00:49:34 +0000 (0:00:01.202) 0:03:35.665 ******** 2026-04-04 00:51:57.957105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957149 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957161 | orchestrator | 2026-04-04 00:51:57.957168 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-04 00:51:57.957174 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:02.843) 0:03:38.508 ******** 2026-04-04 00:51:57.957180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957187 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.957193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957200 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.957226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957234 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.957241 | orchestrator | 2026-04-04 00:51:57.957275 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-04 00:51:57.957285 | orchestrator | Saturday 04 April 2026 00:49:37 +0000 (0:00:00.446) 0:03:38.955 ******** 2026-04-04 00:51:57.957293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957314 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.957320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957333 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.957340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957353 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.957359 | orchestrator | 2026-04-04 00:51:57.957365 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-04 00:51:57.957372 | orchestrator | Saturday 04 April 2026 00:49:38 +0000 (0:00:01.030) 0:03:39.985 ******** 2026-04-04 00:51:57.957379 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957387 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957394 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957401 | orchestrator | 2026-04-04 00:51:57.957408 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-04 00:51:57.957413 | orchestrator | Saturday 04 April 2026 00:49:39 +0000 (0:00:01.312) 0:03:41.298 ******** 2026-04-04 00:51:57.957418 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957423 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957428 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957433 | orchestrator | 2026-04-04 00:51:57.957438 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-04-04 00:51:57.957442 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:01.895) 0:03:43.193 ******** 2026-04-04 00:51:57.957447 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.957452 | orchestrator | 2026-04-04 00:51:57.957457 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-04 00:51:57.957461 | orchestrator | Saturday 04 April 2026 00:49:42 +0000 (0:00:01.299) 0:03:44.493 ******** 2026-04-04 00:51:57.957468 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.957577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957587 | orchestrator | 2026-04-04 00:51:57.957592 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-04 00:51:57.957597 | orchestrator | Saturday 04 April 2026 00:49:46 +0000 (0:00:03.727) 0:03:48.221 ******** 2026-04-04 00:51:57.957603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957654 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.957662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957684 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.957691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.957724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.957740 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.957746 | orchestrator | 2026-04-04 00:51:57.957751 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-04 00:51:57.957755 | orchestrator | Saturday 04 April 2026 00:49:47 +0000 (0:00:00.574) 0:03:48.795 ******** 2026-04-04 00:51:57.957760 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957779 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.957783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957805 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.957809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-04 00:51:57.957922 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.957927 | orchestrator | 2026-04-04 00:51:57.957932 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-04 00:51:57.957936 | orchestrator | Saturday 04 April 2026 00:49:48 +0000 (0:00:00.825) 0:03:49.621 ******** 2026-04-04 00:51:57.957941 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957945 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957952 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957956 | orchestrator | 2026-04-04 00:51:57.957960 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-04 00:51:57.957964 | orchestrator | Saturday 04 April 2026 00:49:49 +0000 (0:00:01.663) 0:03:51.284 ******** 2026-04-04 
00:51:57.957969 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.957973 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.957977 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.957981 | orchestrator | 2026-04-04 00:51:57.957985 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-04 00:51:57.957989 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:01.955) 0:03:53.240 ******** 2026-04-04 00:51:57.957993 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.957997 | orchestrator | 2026-04-04 00:51:57.958001 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-04 00:51:57.958005 | orchestrator | Saturday 04 April 2026 00:49:52 +0000 (0:00:01.140) 0:03:54.381 ******** 2026-04-04 00:51:57.958009 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-04 00:51:57.958036 | orchestrator | 2026-04-04 00:51:57.958041 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-04 00:51:57.958045 | orchestrator | Saturday 04 April 2026 00:49:53 +0000 (0:00:01.072) 0:03:55.454 ******** 2026-04-04 00:51:57.958050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:51:57.958055 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:51:57.958063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-04 00:51:57.958067 | orchestrator | 2026-04-04 00:51:57.958082 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-04 00:51:57.958087 | orchestrator | Saturday 04 April 2026 00:49:57 +0000 (0:00:03.519) 0:03:58.974 ******** 2026-04-04 00:51:57.958091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958096 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958100 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958119 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958132 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958136 | orchestrator | 2026-04-04 00:51:57.958151 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-04 00:51:57.958156 | orchestrator | Saturday 04 April 2026 00:49:58 +0000 (0:00:01.093) 0:04:00.067 ******** 2026-04-04 00:51:57.958160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:51:57.958165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-04-04 00:51:57.958169 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:51:57.958182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:51:57.958187 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:51:57.958197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-04 00:51:57.958204 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958210 | orchestrator | 2026-04-04 00:51:57.958219 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-04 00:51:57.958228 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:01.547) 0:04:01.615 ******** 2026-04-04 00:51:57.958234 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.958240 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.958246 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.958271 | orchestrator | 2026-04-04 00:51:57.958277 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-04-04 00:51:57.958284 | orchestrator | Saturday 04 April 2026 00:50:02 +0000 (0:00:02.223) 0:04:03.839 ******** 2026-04-04 00:51:57.958290 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.958297 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.958304 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.958311 | orchestrator | 2026-04-04 00:51:57.958318 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-04 00:51:57.958324 | orchestrator | Saturday 04 April 2026 00:50:05 +0000 (0:00:02.699) 0:04:06.538 ******** 2026-04-04 00:51:57.958332 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-04 00:51:57.958339 | orchestrator | 2026-04-04 00:51:57.958346 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-04 00:51:57.958353 | orchestrator | Saturday 04 April 2026 00:50:05 +0000 (0:00:00.735) 0:04:07.274 ******** 2026-04-04 00:51:57.958361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958366 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958406 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958431 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958437 | orchestrator | 2026-04-04 00:51:57.958444 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-04 00:51:57.958450 | orchestrator | Saturday 04 April 2026 00:50:06 +0000 (0:00:01.064) 0:04:08.339 ******** 2026-04-04 00:51:57.958457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958464 | orchestrator | skipping: [testbed-node-0] 2026-04-04 
00:51:57.958470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958477 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-04 00:51:57.958490 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958496 | orchestrator | 2026-04-04 00:51:57.958504 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-04 00:51:57.958510 | orchestrator | Saturday 04 April 2026 00:50:08 +0000 (0:00:01.280) 0:04:09.619 ******** 2026-04-04 00:51:57.958517 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958523 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958529 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958535 | orchestrator | 2026-04-04 00:51:57.958542 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-04 00:51:57.958548 | orchestrator | Saturday 04 April 2026 
00:50:09 +0000 (0:00:01.121) 0:04:10.740 ******** 2026-04-04 00:51:57.958554 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.958574 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.958582 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.958588 | orchestrator | 2026-04-04 00:51:57.958595 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-04 00:51:57.958601 | orchestrator | Saturday 04 April 2026 00:50:11 +0000 (0:00:02.072) 0:04:12.813 ******** 2026-04-04 00:51:57.958608 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.958615 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.958622 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.958628 | orchestrator | 2026-04-04 00:51:57.958635 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-04 00:51:57.958642 | orchestrator | Saturday 04 April 2026 00:50:14 +0000 (0:00:03.064) 0:04:15.878 ******** 2026-04-04 00:51:57.958657 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-04 00:51:57.958664 | orchestrator | 2026-04-04 00:51:57.958702 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-04 00:51:57.958712 | orchestrator | Saturday 04 April 2026 00:50:15 +0000 (0:00:00.796) 0:04:16.674 ******** 2026-04-04 00:51:57.958721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958725 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958734 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958742 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958746 | orchestrator | 2026-04-04 00:51:57.958751 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-04 00:51:57.958755 | orchestrator | Saturday 04 April 2026 00:50:16 +0000 (0:00:01.378) 0:04:18.053 ******** 2026-04-04 00:51:57.958760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958764 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.958768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958772 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-04 00:51:57.958785 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958789 | orchestrator | 2026-04-04 00:51:57.958793 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-04 00:51:57.958797 | orchestrator | Saturday 04 April 2026 00:50:17 +0000 (0:00:01.274) 0:04:19.327 ******** 2026-04-04 00:51:57.958801 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 00:51:57.958805 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.958810 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.958814 | orchestrator | 2026-04-04 00:51:57.958818 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-04 00:51:57.958838 | orchestrator | Saturday 04 April 2026 00:50:19 +0000 (0:00:01.479) 0:04:20.807 ******** 2026-04-04 00:51:57.958843 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.958847 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.958851 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.958855 | orchestrator | 2026-04-04 00:51:57.958859 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-04 00:51:57.958866 | orchestrator | Saturday 04 April 2026 00:50:21 +0000 (0:00:02.676) 0:04:23.483 ******** 2026-04-04 00:51:57.958871 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.958875 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.958879 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.958883 | orchestrator | 2026-04-04 00:51:57.958887 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-04 00:51:57.958891 | orchestrator | Saturday 04 April 2026 00:50:24 +0000 (0:00:02.576) 0:04:26.059 ******** 2026-04-04 00:51:57.958895 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.958899 | orchestrator | 2026-04-04 00:51:57.958903 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-04 00:51:57.958907 | orchestrator | Saturday 04 April 2026 00:50:25 +0000 (0:00:01.201) 0:04:27.261 ******** 2026-04-04 00:51:57.958912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.958917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.958921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.958930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.958948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.958956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.958961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.958965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.958970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.958977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.958994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.959004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.959009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.959025 | orchestrator | 2026-04-04 00:51:57.959030 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-04 00:51:57.959034 | orchestrator | Saturday 04 April 2026 00:50:29 +0000 (0:00:03.391) 0:04:30.652 ******** 2026-04-04 00:51:57.959038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.959043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.959063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.959077 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.959082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.959089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.959093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.959122 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.959127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.959134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 00:51:57.959139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 00:51:57.959161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 00:51:57.959166 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.959170 | orchestrator | 2026-04-04 00:51:57.959174 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-04 00:51:57.959182 | orchestrator | Saturday 04 April 2026 00:50:30 +0000 (0:00:01.033) 0:04:31.686 ******** 2026-04-04 00:51:57.959186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-04 00:51:57.959191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-04 00:51:57.959195 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.959199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})
2026-04-04 00:51:57.959204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:51:57.959208 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.959215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:51:57.959219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-04 00:51:57.959224 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.959228 | orchestrator |
2026-04-04 00:51:57.959232 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-04 00:51:57.959236 | orchestrator | Saturday 04 April 2026 00:50:31 +0000 (0:00:00.920) 0:04:32.606 ********
2026-04-04 00:51:57.959240 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.959244 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.959248 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.959294 | orchestrator |
2026-04-04 00:51:57.959299 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] *********
2026-04-04 00:51:57.959303 | orchestrator | Saturday 04 April 2026 00:50:32 +0000 (0:00:01.408) 0:04:34.014 ********
2026-04-04 00:51:57.959307 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:51:57.959311 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:51:57.959315 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:51:57.959319 | orchestrator |
2026-04-04 00:51:57.959323 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-04 00:51:57.959327 | orchestrator | Saturday 04 April 2026 00:50:34 +0000 (0:00:02.175) 0:04:36.190 ******** 2026-04-04 00:51:57.959331 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.959336 | orchestrator | 2026-04-04 00:51:57.959340 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-04 00:51:57.959344 | orchestrator | Saturday 04 April 2026 00:50:36 +0000 (0:00:01.572) 0:04:37.762 ******** 2026-04-04 00:51:57.959349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:51:57.959369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:51:57.959378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:51:57.959388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:51:57.959393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:51:57.959411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:51:57.959416 | orchestrator | 2026-04-04 00:51:57.959420 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-04 00:51:57.959427 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:05.284) 0:04:43.046 ******** 2026-04-04 00:51:57.959435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:51:57.959440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:51:57.959444 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.959449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:51:57.959465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:51:57.959471 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.959478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:51:57.959485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:51:57.959490 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.959494 | orchestrator | 2026-04-04 00:51:57.959498 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-04 00:51:57.959502 | orchestrator | Saturday 04 April 2026 00:50:42 +0000 (0:00:00.969) 0:04:44.015 ******** 2026-04-04 00:51:57.959506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-04 00:51:57.959511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-04 00:51:57.959515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-04 00:51:57.959521 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.959525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-04 00:51:57.959529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-04 00:51:57.959533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-04 00:51:57.959537 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.959542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-04 00:51:57.959549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-04 00:51:57.959570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-04 00:51:57.959575 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.959580 | orchestrator |
2026-04-04 00:51:57.959584 | orchestrator | TASK [proxysql-config : Copying over opensearch
ProxySQL users config] *********
2026-04-04 00:51:57.959588 | orchestrator | Saturday 04 April 2026 00:50:43 +0000 (0:00:01.243) 0:04:45.259 ********
2026-04-04 00:51:57.959592 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.959596 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.959600 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.959604 | orchestrator |
2026-04-04 00:51:57.959609 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-04 00:51:57.959613 | orchestrator | Saturday 04 April 2026 00:50:44 +0000 (0:00:00.451) 0:04:45.711 ********
2026-04-04 00:51:57.959617 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.959621 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.959625 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.959629 | orchestrator |
2026-04-04 00:51:57.959633 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-04 00:51:57.959637 | orchestrator | Saturday 04 April 2026 00:50:45 +0000 (0:00:01.284) 0:04:46.996 ********
2026-04-04 00:51:57.959642 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:51:57.959646 | orchestrator |
2026-04-04 00:51:57.959650 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-04 00:51:57.959654 | orchestrator | Saturday 04 April 2026 00:50:47 +0000 (0:00:01.612) 0:04:48.608 ********
2026-04-04 00:51:57.959659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-04 00:51:57.959664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 00:51:57.959668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:51:57.959673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-04 00:51:57.959694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 00:51:57.959702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 00:51:57.959706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 00:51:57.959711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-04 00:51:57.959719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:51:57.959754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.959772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.959781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.959799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.959811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.959835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.959839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959854 | orchestrator |
2026-04-04 00:51:57.959858 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-04-04 00:51:57.959862 | orchestrator | Saturday 04 April 2026 00:50:51 +0000 (0:00:04.445) 0:04:53.054 ********
2026-04-04 00:51:57.959870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-04 00:51:57.959874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:51:57.959878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.959899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.959906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959918 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.959922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-04 00:51:57.959929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:51:57.959933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-04 00:51:57.959949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 00:51:57.959957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.959966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.959979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.959991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.959997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-04 00:51:57.960001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.960007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-04-04 00:51:57.960011 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.960018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.960022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 00:51:57.960026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-04 00:51:57.960032 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.960036 | orchestrator |
2026-04-04 00:51:57.960040 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-04-04 00:51:57.960044 | orchestrator | Saturday 04 April 2026 00:50:52 +0000 (0:00:00.727) 0:04:53.781 ********
2026-04-04 00:51:57.960048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960068 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.960072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960084 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.960089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-04-04 00:51:57.960101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-04-04 00:51:57.960114 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.960118 | orchestrator |
2026-04-04 00:51:57.960122 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-04-04 00:51:57.960126 | orchestrator | Saturday 04 April 2026 00:50:53 +0000 (0:00:01.074) 0:04:54.856 ********
2026-04-04 00:51:57.960129 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:51:57.960133 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:51:57.960137 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:51:57.960141 | orchestrator |
2026-04-04 00:51:57.960144 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config]
********* 2026-04-04 00:51:57.960151 | orchestrator | Saturday 04 April 2026 00:50:53 +0000 (0:00:00.393) 0:04:55.250 ******** 2026-04-04 00:51:57.960155 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960159 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960162 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960166 | orchestrator | 2026-04-04 00:51:57.960170 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-04 00:51:57.960174 | orchestrator | Saturday 04 April 2026 00:50:54 +0000 (0:00:01.069) 0:04:56.319 ******** 2026-04-04 00:51:57.960177 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.960181 | orchestrator | 2026-04-04 00:51:57.960185 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-04 00:51:57.960189 | orchestrator | Saturday 04 April 2026 00:50:56 +0000 (0:00:01.278) 0:04:57.598 ******** 2026-04-04 00:51:57.960193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:57.960197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:57.960206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-04 00:51:57.960214 | orchestrator | 2026-04-04 00:51:57.960218 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-04 00:51:57.960221 | orchestrator | Saturday 04 April 2026 00:50:58 +0000 (0:00:02.295) 0:04:59.893 ******** 2026-04-04 00:51:57.960225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:57.960230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:57.960234 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960237 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-04 00:51:57.960249 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960267 | orchestrator | 2026-04-04 00:51:57.960273 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-04 00:51:57.960277 | orchestrator | Saturday 04 April 2026 00:50:58 +0000 (0:00:00.351) 0:05:00.245 ******** 2026-04-04 00:51:57.960284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:51:57.960288 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:51:57.960295 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-04 00:51:57.960303 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960307 | orchestrator | 2026-04-04 00:51:57.960310 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-04 00:51:57.960314 | orchestrator | Saturday 04 April 2026 00:50:59 +0000 (0:00:00.550) 0:05:00.795 ******** 2026-04-04 00:51:57.960318 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960322 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960325 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960329 | orchestrator | 2026-04-04 00:51:57.960333 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-04 00:51:57.960337 | orchestrator | Saturday 04 April 2026 00:50:59 +0000 (0:00:00.657) 0:05:01.452 ******** 2026-04-04 00:51:57.960340 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960344 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960348 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960352 | orchestrator | 2026-04-04 00:51:57.960355 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-04 00:51:57.960359 | orchestrator | Saturday 04 April 2026 00:51:01 +0000 (0:00:01.135) 
0:05:02.588 ******** 2026-04-04 00:51:57.960363 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:51:57.960367 | orchestrator | 2026-04-04 00:51:57.960370 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-04 00:51:57.960374 | orchestrator | Saturday 04 April 2026 00:51:02 +0000 (0:00:01.309) 0:05:03.897 ******** 2026-04-04 00:51:57.960378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-04 00:51:57.960415 | orchestrator | 2026-04-04 00:51:57.960419 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-04 00:51:57.960422 | orchestrator | Saturday 04 April 2026 00:51:07 +0000 (0:00:05.517) 0:05:09.415 ******** 2026-04-04 00:51:57.960428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960439 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960455 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-04 00:51:57.960471 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960475 | orchestrator | 2026-04-04 00:51:57.960479 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-04 00:51:57.960483 | orchestrator | Saturday 04 April 2026 00:51:08 +0000 (0:00:00.799) 0:05:10.215 ******** 2026-04-04 00:51:57.960486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960502 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960524 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-04 00:51:57.960539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}})  2026-04-04 00:51:57.960543 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960547 | orchestrator | 2026-04-04 00:51:57.960551 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-04 00:51:57.960554 | orchestrator | Saturday 04 April 2026 00:51:09 +0000 (0:00:00.845) 0:05:11.060 ******** 2026-04-04 00:51:57.960558 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.960562 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.960566 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.960571 | orchestrator | 2026-04-04 00:51:57.960580 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-04 00:51:57.960589 | orchestrator | Saturday 04 April 2026 00:51:10 +0000 (0:00:01.253) 0:05:12.313 ******** 2026-04-04 00:51:57.960596 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.960605 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.960611 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.960617 | orchestrator | 2026-04-04 00:51:57.960623 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-04 00:51:57.960629 | orchestrator | Saturday 04 April 2026 00:51:12 +0000 (0:00:01.998) 0:05:14.312 ******** 2026-04-04 00:51:57.960636 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960641 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960647 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960653 | orchestrator | 2026-04-04 00:51:57.960659 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-04 00:51:57.960665 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:00.469) 0:05:14.782 ******** 2026-04-04 00:51:57.960672 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960678 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:51:57.960684 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960690 | orchestrator | 2026-04-04 00:51:57.960696 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-04 00:51:57.960703 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:00.268) 0:05:15.050 ******** 2026-04-04 00:51:57.960710 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960716 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960722 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960729 | orchestrator | 2026-04-04 00:51:57.960735 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-04 00:51:57.960742 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:00.274) 0:05:15.325 ******** 2026-04-04 00:51:57.960754 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960761 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960767 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960775 | orchestrator | 2026-04-04 00:51:57.960782 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-04 00:51:57.960789 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.250) 0:05:15.575 ******** 2026-04-04 00:51:57.960796 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960802 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.960809 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960814 | orchestrator | 2026-04-04 00:51:57.960818 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-04 00:51:57.960822 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.456) 0:05:16.031 ******** 2026-04-04 00:51:57.960825 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.960829 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:51:57.960833 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.960837 | orchestrator | 2026-04-04 00:51:57.960840 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-04 00:51:57.960844 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.474) 0:05:16.505 ******** 2026-04-04 00:51:57.960848 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.960852 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.960856 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.960860 | orchestrator | 2026-04-04 00:51:57.960864 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-04 00:51:57.960867 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.632) 0:05:17.138 ******** 2026-04-04 00:51:57.960871 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.960875 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.960879 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.960882 | orchestrator | 2026-04-04 00:51:57.960886 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-04 00:51:57.960890 | orchestrator | Saturday 04 April 2026 00:51:16 +0000 (0:00:00.503) 0:05:17.641 ******** 2026-04-04 00:51:57.960893 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.960897 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.960901 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.960905 | orchestrator | 2026-04-04 00:51:57.960908 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-04 00:51:57.960912 | orchestrator | Saturday 04 April 2026 00:51:16 +0000 (0:00:00.819) 0:05:18.460 ******** 2026-04-04 00:51:57.960916 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.960920 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.960923 | orchestrator | ok: 
[testbed-node-2] 2026-04-04 00:51:57.960927 | orchestrator | 2026-04-04 00:51:57.960931 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-04 00:51:57.960934 | orchestrator | Saturday 04 April 2026 00:51:17 +0000 (0:00:00.877) 0:05:19.338 ******** 2026-04-04 00:51:57.960938 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.960942 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.960945 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.960949 | orchestrator | 2026-04-04 00:51:57.961001 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-04 00:51:57.961016 | orchestrator | Saturday 04 April 2026 00:51:18 +0000 (0:00:00.854) 0:05:20.193 ******** 2026-04-04 00:51:57.961020 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.961024 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.961028 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.961031 | orchestrator | 2026-04-04 00:51:57.961035 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-04 00:51:57.961039 | orchestrator | Saturday 04 April 2026 00:51:28 +0000 (0:00:09.522) 0:05:29.715 ******** 2026-04-04 00:51:57.961043 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.961050 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.961053 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.961057 | orchestrator | 2026-04-04 00:51:57.961061 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-04 00:51:57.961064 | orchestrator | Saturday 04 April 2026 00:51:29 +0000 (0:00:00.961) 0:05:30.676 ******** 2026-04-04 00:51:57.961068 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.961072 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.961076 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.961079 | 
orchestrator | 2026-04-04 00:51:57.961083 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-04 00:51:57.961091 | orchestrator | Saturday 04 April 2026 00:51:42 +0000 (0:00:13.327) 0:05:44.004 ******** 2026-04-04 00:51:57.961095 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.961099 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.961103 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.961106 | orchestrator | 2026-04-04 00:51:57.961110 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-04 00:51:57.961117 | orchestrator | Saturday 04 April 2026 00:51:43 +0000 (0:00:00.774) 0:05:44.778 ******** 2026-04-04 00:51:57.961121 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:51:57.961124 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:51:57.961128 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:51:57.961132 | orchestrator | 2026-04-04 00:51:57.961136 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-04 00:51:57.961140 | orchestrator | Saturday 04 April 2026 00:51:52 +0000 (0:00:09.301) 0:05:54.080 ******** 2026-04-04 00:51:57.961143 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961147 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961151 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961154 | orchestrator | 2026-04-04 00:51:57.961158 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-04 00:51:57.961162 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.536) 0:05:54.616 ******** 2026-04-04 00:51:57.961165 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961169 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961173 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961176 | orchestrator | 
2026-04-04 00:51:57.961180 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-04 00:51:57.961184 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.305) 0:05:54.922 ******** 2026-04-04 00:51:57.961188 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961191 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961195 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961198 | orchestrator | 2026-04-04 00:51:57.961202 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-04 00:51:57.961206 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.302) 0:05:55.225 ******** 2026-04-04 00:51:57.961209 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961213 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961217 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961221 | orchestrator | 2026-04-04 00:51:57.961224 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-04 00:51:57.961228 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.305) 0:05:55.531 ******** 2026-04-04 00:51:57.961232 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961235 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961239 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961243 | orchestrator | 2026-04-04 00:51:57.961247 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-04 00:51:57.961269 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.525) 0:05:56.057 ******** 2026-04-04 00:51:57.961277 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:51:57.961281 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:51:57.961288 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:51:57.961292 | orchestrator | 
2026-04-04 00:51:57.961296 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-04 00:51:57.961300 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.281) 0:05:56.338 ******** 2026-04-04 00:51:57.961303 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.961307 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.961311 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.961315 | orchestrator | 2026-04-04 00:51:57.961318 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-04 00:51:57.961322 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.822) 0:05:57.160 ******** 2026-04-04 00:51:57.961326 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:51:57.961330 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:51:57.961333 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:51:57.961337 | orchestrator | 2026-04-04 00:51:57.961341 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:51:57.961345 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-04 00:51:57.961349 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-04 00:51:57.961353 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-04-04 00:51:57.961357 | orchestrator | 2026-04-04 00:51:57.961360 | orchestrator | 2026-04-04 00:51:57.961364 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:51:57.961368 | orchestrator | Saturday 04 April 2026 00:51:56 +0000 (0:00:00.762) 0:05:57.922 ******** 2026-04-04 00:51:57.961372 | orchestrator | =============================================================================== 2026-04-04 00:51:57.961375 | orchestrator | 
loadbalancer : Start backup proxysql container ------------------------- 13.33s 2026-04-04 00:51:57.961379 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.52s 2026-04-04 00:51:57.961383 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.30s 2026-04-04 00:51:57.961387 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 6.37s 2026-04-04 00:51:57.961391 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.52s 2026-04-04 00:51:57.961394 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 5.41s 2026-04-04 00:51:57.961398 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s 2026-04-04 00:51:57.961402 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.45s 2026-04-04 00:51:57.961408 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.24s 2026-04-04 00:51:57.961411 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.21s 2026-04-04 00:51:57.961415 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.11s 2026-04-04 00:51:57.961422 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.85s 2026-04-04 00:51:57.961425 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.81s 2026-04-04 00:51:57.961429 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.80s 2026-04-04 00:51:57.961433 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.79s 2026-04-04 00:51:57.961437 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.77s 2026-04-04 00:51:57.961440 | orchestrator | 
haproxy-config : Copying over nova haproxy config ----------------------- 3.73s 2026-04-04 00:51:57.961444 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.70s 2026-04-04 00:51:57.961448 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.61s 2026-04-04 00:51:57.961457 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.59s 2026-04-04 00:51:57.961461 | orchestrator | 2026-04-04 00:51:57 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:00.980229 | orchestrator | 2026-04-04 00:52:00 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:52:00.980845 | orchestrator | 2026-04-04 00:52:00 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:52:00.984710 | orchestrator | 2026-04-04 00:52:00 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:52:00.984804 | orchestrator | 2026-04-04 00:52:00 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:04.021419 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:52:04.021522 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:52:04.021533 | orchestrator | 2026-04-04 00:52:04 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:52:04.021541 | orchestrator | 2026-04-04 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:52:07.050210 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state STARTED 2026-04-04 00:52:07.052366 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:52:07.054570 | orchestrator | 2026-04-04 00:52:07 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in 
state STARTED 2026-04-04 00:52:07.054618 | orchestrator | 2026-04-04 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:35.419348 | orchestrator | 2026-04-04 00:53:35 | INFO  | Task f5a4cb4e-bf2e-4771-a3c0-f86a43a27a34 is in state SUCCESS 2026-04-04 00:53:35.420881 | orchestrator | 2026-04-04 00:53:35.420924 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:53:35.420931 | orchestrator | 2.16.14 2026-04-04 00:53:35.420936 | orchestrator | 2026-04-04 00:53:35.420941 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-04-04 00:53:35.420946 | orchestrator | 2026-04-04 00:53:35.420950 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-04 00:53:35.420955 | orchestrator | Saturday 04 April 2026 00:43:44 +0000 (0:00:00.687) 0:00:00.687 ******** 2026-04-04 00:53:35.420960 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.420965 | orchestrator | 2026-04-04 00:53:35.420969 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-04 00:53:35.420974 | orchestrator | Saturday 04 April 2026 00:43:46 +0000 (0:00:01.231) 0:00:01.918
******** 2026-04-04 00:53:35.420978 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.420983 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.420987 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.420991 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.420995 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421000 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421004 | orchestrator | 2026-04-04 00:53:35.421008 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-04 00:53:35.421013 | orchestrator | Saturday 04 April 2026 00:43:47 +0000 (0:00:01.724) 0:00:03.643 ******** 2026-04-04 00:53:35.421017 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421021 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421039 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421050 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421059 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421064 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421068 | orchestrator | 2026-04-04 00:53:35.421073 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-04 00:53:35.421086 | orchestrator | Saturday 04 April 2026 00:43:48 +0000 (0:00:00.737) 0:00:04.380 ******** 2026-04-04 00:53:35.421090 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421095 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421109 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421113 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421118 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421122 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421126 | orchestrator | 2026-04-04 00:53:35.421131 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-04 00:53:35.421135 | orchestrator | Saturday 04 April 2026 00:43:49 
+0000 (0:00:01.234) 0:00:05.614 ******** 2026-04-04 00:53:35.421139 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421144 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421148 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421152 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421157 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421161 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421165 | orchestrator | 2026-04-04 00:53:35.421170 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-04 00:53:35.421174 | orchestrator | Saturday 04 April 2026 00:43:50 +0000 (0:00:00.980) 0:00:06.595 ******** 2026-04-04 00:53:35.421240 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421247 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421251 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421258 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421266 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421274 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421286 | orchestrator | 2026-04-04 00:53:35.421294 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-04 00:53:35.421302 | orchestrator | Saturday 04 April 2026 00:43:51 +0000 (0:00:00.738) 0:00:07.333 ******** 2026-04-04 00:53:35.421475 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421485 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421489 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421493 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421498 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421502 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421506 | orchestrator | 2026-04-04 00:53:35.421511 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-04 00:53:35.421515 | orchestrator | 
Saturday 04 April 2026 00:43:53 +0000 (0:00:01.460) 0:00:08.794 ******** 2026-04-04 00:53:35.421520 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.421525 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.421529 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.421533 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.421538 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.421542 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.421546 | orchestrator | 2026-04-04 00:53:35.421551 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-04 00:53:35.421555 | orchestrator | Saturday 04 April 2026 00:43:53 +0000 (0:00:00.659) 0:00:09.453 ******** 2026-04-04 00:53:35.421560 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.421564 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.421568 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.421573 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.421577 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.421581 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.421585 | orchestrator | 2026-04-04 00:53:35.421590 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-04 00:53:35.421594 | orchestrator | Saturday 04 April 2026 00:43:54 +0000 (0:00:01.039) 0:00:10.493 ******** 2026-04-04 00:53:35.421598 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:53:35.421603 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:53:35.421608 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:53:35.421612 | orchestrator | 2026-04-04 00:53:35.421616 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 
2026-04-04 00:53:35.421640 | orchestrator | Saturday 04 April 2026 00:43:55 +0000 (0:00:00.782) 0:00:11.276 ********
2026-04-04 00:53:35.421645 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.421649 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.421653 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.421666 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.421670 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.421675 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.421679 | orchestrator |
2026-04-04 00:53:35.421683 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-04 00:53:35.421688 | orchestrator | Saturday 04 April 2026 00:43:56 +0000 (0:00:01.205) 0:00:12.481 ********
2026-04-04 00:53:35.421692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:53:35.421697 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:53:35.421701 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:53:35.421705 | orchestrator |
2026-04-04 00:53:35.421710 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-04 00:53:35.421714 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:02.645) 0:00:15.127 ********
2026-04-04 00:53:35.421719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:53:35.421723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:53:35.421728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:53:35.421732 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.421736 | orchestrator |
2026-04-04 00:53:35.421741 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-04 00:53:35.421745 | orchestrator | Saturday 04 April 2026 00:43:59 +0000 (0:00:00.444) 0:00:15.572 ********
2026-04-04 00:53:35.421750 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.421760 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422125 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422130 | orchestrator |
2026-04-04 00:53:35.422135 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-04 00:53:35.422139 | orchestrator | Saturday 04 April 2026 00:44:00 +0000 (0:00:00.911) 0:00:16.483 ********
2026-04-04 00:53:35.422145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422151 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422167 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422171 | orchestrator |
2026-04-04 00:53:35.422175 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-04 00:53:35.422180 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:00.283) 0:00:16.767 ********
2026-04-04 00:53:35.422218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-04 00:43:57.486842', 'end': '2026-04-04 00:43:57.577303', 'delta': '0:00:00.090461', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-04 00:43:58.048845', 'end': '2026-04-04 00:43:58.147149', 'delta': '0:00:00.098304', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422234 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-04 00:43:59.100189', 'end': '2026-04-04 00:43:59.217933', 'delta': '0:00:00.117744', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.422239 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422243 | orchestrator |
2026-04-04 00:53:35.422255 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-04 00:53:35.422263 | orchestrator | Saturday 04 April 2026 00:44:01 +0000 (0:00:00.550) 0:00:17.318 ********
2026-04-04 00:53:35.422270 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.422400 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.422410 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.422414 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.422418 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.422423 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.422427 | orchestrator |
2026-04-04 00:53:35.422454 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-04 00:53:35.422460 | orchestrator | Saturday 04 April 2026 00:44:03 +0000 (0:00:01.949) 0:00:19.268 ********
2026-04-04 00:53:35.422464 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:53:35.422469 | orchestrator |
2026-04-04 00:53:35.422473 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-04 00:53:35.422483 | orchestrator | Saturday 04 April 2026 00:44:04 +0000 (0:00:00.756) 0:00:20.024 ********
2026-04-04 00:53:35.422487 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422492 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.422496 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.422501 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.422505 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.422509 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.422514 | orchestrator |
2026-04-04 00:53:35.422519 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-04 00:53:35.422528 | orchestrator | Saturday 04 April 2026 00:44:05 +0000 (0:00:01.469) 0:00:21.494 ********
2026-04-04 00:53:35.422539 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422546 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.422554 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.422561 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.422568 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.422575 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.422590 | orchestrator |
2026-04-04 00:53:35.422597 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:53:35.422604 | orchestrator | Saturday 04 April 2026 00:44:06 +0000 (0:00:01.028) 0:00:22.522 ********
2026-04-04 00:53:35.422611 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422618 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.422625 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.422632 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.422860 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.422868 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.422872 | orchestrator |
2026-04-04 00:53:35.422877 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-04 00:53:35.422882 | orchestrator | Saturday 04 April 2026 00:44:07 +0000 (0:00:00.775) 0:00:23.297 ********
2026-04-04 00:53:35.422886 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422890 | orchestrator |
2026-04-04 00:53:35.422895 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-04 00:53:35.422899 | orchestrator | Saturday 04 April 2026 00:44:07 +0000 (0:00:00.087) 0:00:23.385 ********
2026-04-04 00:53:35.422904 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422908 | orchestrator |
2026-04-04 00:53:35.422913 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:53:35.422917 | orchestrator | Saturday 04 April 2026 00:44:07 +0000 (0:00:00.245) 0:00:23.630 ********
2026-04-04 00:53:35.422922 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.422926 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.422930 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.422966 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.422973 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.422978 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.422982 | orchestrator |
2026-04-04 00:53:35.422987 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-04 00:53:35.422991 | orchestrator | Saturday 04 April 2026 00:44:08 +0000 (0:00:00.713) 0:00:24.344 ********
2026-04-04 00:53:35.422996 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423000 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423005 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423010 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.423016 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.423061 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.423068 | orchestrator |
2026-04-04 00:53:35.423073 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-04 00:53:35.423077 | orchestrator | Saturday 04 April 2026 00:44:09 +0000 (0:00:00.899) 0:00:25.244 ********
2026-04-04 00:53:35.423082 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423095 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423741 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423765 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.423772 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.423779 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.423785 | orchestrator |
2026-04-04 00:53:35.423793 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-04 00:53:35.423800 | orchestrator | Saturday 04 April 2026 00:44:10 +0000 (0:00:00.947) 0:00:26.191 ********
2026-04-04 00:53:35.423807 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423812 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423816 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423820 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.423824 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.423829 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.423836 | orchestrator |
2026-04-04 00:53:35.423843 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-04 00:53:35.423854 | orchestrator | Saturday 04 April 2026 00:44:11 +0000 (0:00:00.821) 0:00:27.012 ********
2026-04-04 00:53:35.423861 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423868 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423875 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423882 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.423889 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.423896 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.423901 | orchestrator |
2026-04-04 00:53:35.423905 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-04 00:53:35.423911 | orchestrator | Saturday 04 April 2026 00:44:12 +0000 (0:00:01.132) 0:00:28.144 ********
2026-04-04 00:53:35.423918 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423925 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423931 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423937 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.423944 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.423951 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.423959 | orchestrator |
2026-04-04 00:53:35.423965 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-04 00:53:35.423972 | orchestrator | Saturday 04 April 2026 00:44:13 +0000 (0:00:00.613) 0:00:28.757 ********
2026-04-04 00:53:35.423979 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.423986 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.423992 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.423999 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.424005 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.424011 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.424017 | orchestrator |
2026-04-04 00:53:35.424081 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-04 00:53:35.424087 | orchestrator | Saturday 04 April 2026 00:44:13 +0000 (0:00:00.507) 0:00:29.265 ********
2026-04-04 00:53:35.424093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519', 'dm-uuid-LVM-M9GI4tNPMhIL9E0kFjOEeN17N1f5LxVN4O5GSm4RLJBoiT8R2ghPV5w3wf3nWemL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906', 'dm-uuid-LVM-r0lB9UuGpQCf3kMFs8zvHlZuRtH2PKlnpVETyxuv7nEpJmzX6s3HbLpsn28uK4Tg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ACeHCA-O1ys-44K7-0m3K-pzzu-98Hz-IMyawd', 'scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8', 'scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcJoM8-3FHZ-1ME2-NVPJ-2WCZ-VPLE-T2V5u3', 'scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737', 'scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9', 'scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f', 'dm-uuid-LVM-HT7voBypEw31a9Cjr4Fa1wcBJgYUr5EfbXh6BXfE3G6hwaB7cC5YacX2YO8ZHwbY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa', 'dm-uuid-LVM-qMvt7xqAxXG2O8BdCvvt7q9bmWDLB7rZXdoxIa8uZ0hlbHFQg22690Xpbwin8xpu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424398 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.424408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424425 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:53:35.424518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdOOIN-sqdQ-Yzbu-z9Ck-YhV9-4eU3-Q05miU', 'scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55', 'scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z5si0g-bXnY-Uer7-JCzi-gXmG-Q6Ma-iD3UG0', 'scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159', 'scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8', 'scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:53:35.424654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159', 'dm-uuid-LVM-6PCLJiqtncSsW11ER2Vse6KNZiossrrndGP1WdFeKSiTlqeTvJKRFEmvnrMRuJtR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7', 'dm-uuid-LVM-tV9ZTDPHn1Gk7L263V8luxEzWE16Jn61SmQpaaQwl00FKWtcO1GG0ZAv69UTxQW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424698 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.424706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424770 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part1', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part14', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part15', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part16', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.424884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.424889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424901 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424909 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.424914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.424954 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rV2tHg-lSWp-N667-0UVN-DDUM-luRq-WRLITf', 'scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae', 'scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.424962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.424966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jAoZd6-7gHp-96M7-Ytyk-lMu0-4WAT-KhB2fY', 'scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905', 'scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.424970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5', 'scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-04 00:53:35.425062 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.425068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part15', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425150 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425158 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.425165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-04 00:53:35.425185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:53:35.425259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:53:35.425295 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.425299 | orchestrator | 2026-04-04 00:53:35.425304 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-04 00:53:35.425309 | orchestrator | Saturday 04 April 2026 00:44:15 +0000 (0:00:01.767) 0:00:31.032 ******** 2026-04-04 00:53:35.425313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519', 'dm-uuid-LVM-M9GI4tNPMhIL9E0kFjOEeN17N1f5LxVN4O5GSm4RLJBoiT8R2ghPV5w3wf3nWemL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906', 'dm-uuid-LVM-r0lB9UuGpQCf3kMFs8zvHlZuRtH2PKlnpVETyxuv7nEpJmzX6s3HbLpsn28uK4Tg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f', 'dm-uuid-LVM-HT7voBypEw31a9Cjr4Fa1wcBJgYUr5EfbXh6BXfE3G6hwaB7cC5YacX2YO8ZHwbY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa', 'dm-uuid-LVM-qMvt7xqAxXG2O8BdCvvt7q9bmWDLB7rZXdoxIa8uZ0hlbHFQg22690Xpbwin8xpu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425401 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425438 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425464 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425508 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdOOIN-sqdQ-Yzbu-z9Ck-YhV9-4eU3-Q05miU', 'scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55', 'scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15', 
'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ACeHCA-O1ys-44K7-0m3K-pzzu-98Hz-IMyawd', 'scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8', 'scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z5si0g-bXnY-Uer7-JCzi-gXmG-Q6Ma-iD3UG0', 'scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159', 'scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8', 'scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425700 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcJoM8-3FHZ-1ME2-NVPJ-2WCZ-VPLE-T2V5u3', 'scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737', 'scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-14-00']}, 
'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159', 'dm-uuid-LVM-6PCLJiqtncSsW11ER2Vse6KNZiossrrndGP1WdFeKSiTlqeTvJKRFEmvnrMRuJtR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9', 'scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425756 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7', 'dm-uuid-LVM-tV9ZTDPHn1Gk7L263V8luxEzWE16Jn61SmQpaaQwl00FKWtcO1GG0ZAv69UTxQW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425775 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.425781 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425888 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2026-04-04 00:53:35.425894 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425936 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425949 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425989 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.425998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rV2tHg-lSWp-N667-0UVN-DDUM-luRq-WRLITf', 'scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae', 'scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jAoZd6-7gHp-96M7-Ytyk-lMu0-4WAT-KhB2fY', 'scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905', 'scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5', 'scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426124 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part14', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part15', 
'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part16', 'scsi-SQEMU_QEMU_HARDDISK_e8724c57-8a81-4b1a-b62f-30f3282a03e2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426129 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.426133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426175 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426187 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426198 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426206 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426220 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426293 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426304 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part1', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part14', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part15', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part16', 'scsi-SQEMU_QEMU_HARDDISK_c54ed99d-8116-431b-a73a-2dbb6ef64fe0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 
'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426317 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426366 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.426376 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.426383 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.426390 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426397 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426405 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426421 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426443 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426452 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426492 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426499 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:53:35.426506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part1', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part14', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part15', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part16', 'scsi-SQEMU_QEMU_HARDDISK_3c579845-c8df-472e-b97f-01d742bc5a30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.426518 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:53:35.426524 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.426531 | orchestrator |
2026-04-04 00:53:35.426576 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-04 00:53:35.426587 | orchestrator | Saturday 04 April 2026 00:44:17 +0000 (0:00:01.826) 0:00:32.859 ********
2026-04-04 00:53:35.426594 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.426601 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.426609 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.426613 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.426617 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.426630 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.426634 | orchestrator |
2026-04-04 00:53:35.426639 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-04 00:53:35.426643 | orchestrator | Saturday 04 April 2026 00:44:18 +0000 (0:00:01.639) 0:00:34.498 ********
2026-04-04 00:53:35.426647 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.426651 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.426655 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.426659 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.426662 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.426666 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.426670 | orchestrator |
2026-04-04 00:53:35.426674 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:53:35.426679 | orchestrator | Saturday 04 April 2026 00:44:20 +0000 (0:00:01.331) 0:00:35.830 ********
2026-04-04 00:53:35.426683 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.426687 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.426690 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.426694 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.426698 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.426702 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.426706 | orchestrator |
2026-04-04 00:53:35.426710 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:53:35.426714 | orchestrator | Saturday 04 April 2026 00:44:21 +0000 (0:00:01.225) 0:00:37.055 ********
2026-04-04 00:53:35.426718 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.426722 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.426726 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.426730 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.426737 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.426741 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.426744 | orchestrator |
2026-04-04 00:53:35.426748 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:53:35.426752 | orchestrator | Saturday 04 April 2026 00:44:22 +0000 (0:00:01.048) 0:00:38.104 ********
2026-04-04 00:53:35.426756 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.426767 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.426770 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.426774 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.426778 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.426782 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.426786 | orchestrator |
2026-04-04 00:53:35.426790 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:53:35.426795 | orchestrator | Saturday 04 April 2026 00:44:24 +0000 (0:00:01.851) 0:00:39.956 ********
2026-04-04 00:53:35.426799 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.426802 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.426806 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.426810 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.426814 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.426823 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.426827 | orchestrator |
2026-04-04 00:53:35.426831 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-04 00:53:35.426835 | orchestrator | Saturday 04 April 2026 00:44:25 +0000 (0:00:00.768) 0:00:40.724 ********
2026-04-04 00:53:35.426840 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:53:35.426844 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:53:35.426848 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:53:35.426852 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:53:35.426856 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:53:35.426860 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-04 00:53:35.426864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:53:35.426867 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:53:35.426871 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-04 00:53:35.426875 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:53:35.426879 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:53:35.426883 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-04 00:53:35.426887 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:53:35.426891 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:53:35.426894 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-04 00:53:35.426898 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-04 00:53:35.426902 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:53:35.426906 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-04 00:53:35.426910 | orchestrator |
2026-04-04 00:53:35.426914 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-04 00:53:35.426918 | orchestrator | Saturday 04 April 2026 00:44:28 +0000 (0:00:03.869) 0:00:44.593 ********
2026-04-04
00:53:35.426922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-04 00:53:35.426926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-04 00:53:35.426930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-04 00:53:35.426934 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-04 00:53:35.426938 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-04 00:53:35.426941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-04 00:53:35.426945 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.426949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-04 00:53:35.426953 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.426976 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-04 00:53:35.426981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-04 00:53:35.426985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-04 00:53:35.426992 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-04 00:53:35.426996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-04 00:53:35.427000 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427003 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-04 00:53:35.427007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-04 00:53:35.427011 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-04 00:53:35.427015 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427019 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427036 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-04 00:53:35.427041 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  
2026-04-04 00:53:35.427045 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-04 00:53:35.427049 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427053 | orchestrator | 2026-04-04 00:53:35.427057 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-04 00:53:35.427061 | orchestrator | Saturday 04 April 2026 00:44:29 +0000 (0:00:00.757) 0:00:45.351 ******** 2026-04-04 00:53:35.427065 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427069 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427073 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427077 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.427081 | orchestrator | 2026-04-04 00:53:35.427088 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-04 00:53:35.427093 | orchestrator | Saturday 04 April 2026 00:44:30 +0000 (0:00:01.030) 0:00:46.382 ******** 2026-04-04 00:53:35.427097 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427101 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427104 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427108 | orchestrator | 2026-04-04 00:53:35.427112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-04 00:53:35.427116 | orchestrator | Saturday 04 April 2026 00:44:31 +0000 (0:00:00.369) 0:00:46.751 ******** 2026-04-04 00:53:35.427120 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427124 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427128 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427132 | orchestrator | 2026-04-04 00:53:35.427136 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv6] **** 2026-04-04 00:53:35.427140 | orchestrator | Saturday 04 April 2026 00:44:31 +0000 (0:00:00.356) 0:00:47.108 ******** 2026-04-04 00:53:35.427144 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427148 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427151 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427155 | orchestrator | 2026-04-04 00:53:35.427159 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-04 00:53:35.427163 | orchestrator | Saturday 04 April 2026 00:44:31 +0000 (0:00:00.369) 0:00:47.478 ******** 2026-04-04 00:53:35.427167 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427171 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427175 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427179 | orchestrator | 2026-04-04 00:53:35.427183 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-04 00:53:35.427187 | orchestrator | Saturday 04 April 2026 00:44:32 +0000 (0:00:01.013) 0:00:48.491 ******** 2026-04-04 00:53:35.427190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.427194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.427198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.427202 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427206 | orchestrator | 2026-04-04 00:53:35.427215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-04 00:53:35.427220 | orchestrator | Saturday 04 April 2026 00:44:33 +0000 (0:00:00.392) 0:00:48.884 ******** 2026-04-04 00:53:35.427225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.427229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2026-04-04 00:53:35.427234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.427239 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427243 | orchestrator | 2026-04-04 00:53:35.427248 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-04 00:53:35.427252 | orchestrator | Saturday 04 April 2026 00:44:33 +0000 (0:00:00.350) 0:00:49.234 ******** 2026-04-04 00:53:35.427257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.427261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.427266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.427270 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427275 | orchestrator | 2026-04-04 00:53:35.427280 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-04 00:53:35.427284 | orchestrator | Saturday 04 April 2026 00:44:33 +0000 (0:00:00.410) 0:00:49.645 ******** 2026-04-04 00:53:35.427288 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427293 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427297 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427302 | orchestrator | 2026-04-04 00:53:35.427306 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-04 00:53:35.427311 | orchestrator | Saturday 04 April 2026 00:44:34 +0000 (0:00:00.439) 0:00:50.084 ******** 2026-04-04 00:53:35.427315 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-04 00:53:35.427320 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-04 00:53:35.427339 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:53:35.427344 | orchestrator | 2026-04-04 00:53:35.427349 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-04 
00:53:35.427354 | orchestrator | Saturday 04 April 2026 00:44:35 +0000 (0:00:00.830) 0:00:50.914 ******** 2026-04-04 00:53:35.427359 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:53:35.427364 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:53:35.427369 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:53:35.427374 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:53:35.427379 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:53:35.427383 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:53:35.427387 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:53:35.427391 | orchestrator | 2026-04-04 00:53:35.427395 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-04 00:53:35.427399 | orchestrator | Saturday 04 April 2026 00:44:36 +0000 (0:00:01.136) 0:00:52.051 ******** 2026-04-04 00:53:35.427403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:53:35.427407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:53:35.427411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:53:35.427415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:53:35.427421 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:53:35.427425 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:53:35.427429 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:53:35.427437 | orchestrator | 2026-04-04 00:53:35.427441 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:53:35.427445 | orchestrator | Saturday 04 April 2026 00:44:38 +0000 (0:00:01.692) 0:00:53.743 ******** 2026-04-04 00:53:35.427449 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.427454 | orchestrator | 2026-04-04 00:53:35.427458 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:53:35.427462 | orchestrator | Saturday 04 April 2026 00:44:39 +0000 (0:00:01.364) 0:00:55.108 ******** 2026-04-04 00:53:35.427466 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.427470 | orchestrator | 2026-04-04 00:53:35.427474 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:53:35.427478 | orchestrator | Saturday 04 April 2026 00:44:40 +0000 (0:00:01.505) 0:00:56.613 ******** 2026-04-04 00:53:35.427482 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427486 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427490 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427494 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.427498 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.427501 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.427506 | orchestrator | 2026-04-04 00:53:35.427510 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:53:35.427514 | orchestrator | Saturday 04 April 2026 00:44:42 
+0000 (0:00:01.253) 0:00:57.867 ******** 2026-04-04 00:53:35.427518 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427522 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427526 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427529 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427533 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427537 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427541 | orchestrator | 2026-04-04 00:53:35.427545 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:53:35.427549 | orchestrator | Saturday 04 April 2026 00:44:43 +0000 (0:00:00.864) 0:00:58.732 ******** 2026-04-04 00:53:35.427553 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427557 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427561 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427565 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427569 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427573 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427577 | orchestrator | 2026-04-04 00:53:35.427581 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:53:35.427585 | orchestrator | Saturday 04 April 2026 00:44:43 +0000 (0:00:00.793) 0:00:59.526 ******** 2026-04-04 00:53:35.427589 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427593 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427597 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427601 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427605 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427609 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427613 | orchestrator | 2026-04-04 00:53:35.427617 | orchestrator | TASK [ceph-handler : Check for a mgr container] 
******************************** 2026-04-04 00:53:35.427621 | orchestrator | Saturday 04 April 2026 00:44:44 +0000 (0:00:01.135) 0:01:00.662 ******** 2026-04-04 00:53:35.427625 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427629 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427633 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427636 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.427640 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.427659 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.427664 | orchestrator | 2026-04-04 00:53:35.427669 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:53:35.427673 | orchestrator | Saturday 04 April 2026 00:44:46 +0000 (0:00:01.505) 0:01:02.167 ******** 2026-04-04 00:53:35.427677 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427681 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427684 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427688 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427692 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427696 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427700 | orchestrator | 2026-04-04 00:53:35.427704 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:53:35.427708 | orchestrator | Saturday 04 April 2026 00:44:47 +0000 (0:00:01.393) 0:01:03.560 ******** 2026-04-04 00:53:35.427712 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427716 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427720 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427724 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427727 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427731 | orchestrator | skipping: [testbed-node-2] 2026-04-04 
00:53:35.427735 | orchestrator | 2026-04-04 00:53:35.427739 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:53:35.427743 | orchestrator | Saturday 04 April 2026 00:44:49 +0000 (0:00:01.619) 0:01:05.180 ******** 2026-04-04 00:53:35.427747 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427751 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427755 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427759 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.427763 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.427767 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.427771 | orchestrator | 2026-04-04 00:53:35.427775 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:53:35.427779 | orchestrator | Saturday 04 April 2026 00:44:50 +0000 (0:00:01.424) 0:01:06.604 ******** 2026-04-04 00:53:35.427783 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427789 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427793 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427797 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.427801 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.427804 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.427808 | orchestrator | 2026-04-04 00:53:35.427812 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:53:35.427816 | orchestrator | Saturday 04 April 2026 00:44:52 +0000 (0:00:01.373) 0:01:07.978 ******** 2026-04-04 00:53:35.427820 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427825 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427832 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427840 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427850 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:53:35.427857 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427863 | orchestrator | 2026-04-04 00:53:35.427869 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:53:35.427876 | orchestrator | Saturday 04 April 2026 00:44:53 +0000 (0:00:00.894) 0:01:08.872 ******** 2026-04-04 00:53:35.427882 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.427888 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.427895 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.427901 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.427907 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.427913 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.427919 | orchestrator | 2026-04-04 00:53:35.427925 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:53:35.427932 | orchestrator | Saturday 04 April 2026 00:44:53 +0000 (0:00:00.669) 0:01:09.542 ******** 2026-04-04 00:53:35.427944 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.427951 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.427958 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.427965 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.427971 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.427978 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.427984 | orchestrator | 2026-04-04 00:53:35.427988 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:53:35.427992 | orchestrator | Saturday 04 April 2026 00:44:54 +0000 (0:00:00.944) 0:01:10.486 ******** 2026-04-04 00:53:35.427996 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.428000 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.428004 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.428014 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 00:53:35.428018 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428074 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428094 | orchestrator | 2026-04-04 00:53:35.428099 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:53:35.428103 | orchestrator | Saturday 04 April 2026 00:44:55 +0000 (0:00:00.543) 0:01:11.030 ******** 2026-04-04 00:53:35.428107 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.428111 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.428115 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.428119 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.428123 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428127 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428131 | orchestrator | 2026-04-04 00:53:35.428135 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:53:35.428139 | orchestrator | Saturday 04 April 2026 00:44:56 +0000 (0:00:00.696) 0:01:11.726 ******** 2026-04-04 00:53:35.428143 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.428147 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.428150 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.428154 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.428158 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428162 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428166 | orchestrator | 2026-04-04 00:53:35.428170 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:53:35.428174 | orchestrator | Saturday 04 April 2026 00:44:56 +0000 (0:00:00.525) 0:01:12.251 ******** 2026-04-04 00:53:35.428178 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.428182 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
00:53:35.428186 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.428190 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.428224 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428229 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428233 | orchestrator | 2026-04-04 00:53:35.428237 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:53:35.428241 | orchestrator | Saturday 04 April 2026 00:44:57 +0000 (0:00:00.667) 0:01:12.919 ******** 2026-04-04 00:53:35.428245 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.428249 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.428253 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.428257 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.428261 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.428265 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.428269 | orchestrator | 2026-04-04 00:53:35.428273 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:53:35.428277 | orchestrator | Saturday 04 April 2026 00:44:57 +0000 (0:00:00.671) 0:01:13.591 ******** 2026-04-04 00:53:35.428281 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.428285 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.428289 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.428292 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.428301 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.428305 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.428309 | orchestrator | 2026-04-04 00:53:35.428313 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:53:35.428317 | orchestrator | Saturday 04 April 2026 00:44:58 +0000 (0:00:00.588) 0:01:14.179 ******** 2026-04-04 00:53:35.428321 | orchestrator | ok: [testbed-node-3] 
2026-04-04 00:53:35.428325 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.428329 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.428333 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.428337 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.428340 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.428344 | orchestrator | 2026-04-04 00:53:35.428348 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-04 00:53:35.428352 | orchestrator | Saturday 04 April 2026 00:44:59 +0000 (0:00:01.140) 0:01:15.319 ******** 2026-04-04 00:53:35.428356 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.428366 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.428370 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.428374 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.428378 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.428382 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.428386 | orchestrator | 2026-04-04 00:53:35.428390 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-04 00:53:35.428394 | orchestrator | Saturday 04 April 2026 00:45:01 +0000 (0:00:01.601) 0:01:16.921 ******** 2026-04-04 00:53:35.428398 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.428402 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.428405 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.428409 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.428413 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.428417 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.428421 | orchestrator | 2026-04-04 00:53:35.428425 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-04 00:53:35.428429 | orchestrator | Saturday 04 April 2026 00:45:03 +0000 
(0:00:02.491) 0:01:19.412 ******** 2026-04-04 00:53:35.428433 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.428438 | orchestrator | 2026-04-04 00:53:35.428442 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-04 00:53:35.428446 | orchestrator | Saturday 04 April 2026 00:45:05 +0000 (0:00:01.337) 0:01:20.749 ******** 2026-04-04 00:53:35.428450 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.428454 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.428458 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.428461 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.428465 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428469 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428473 | orchestrator | 2026-04-04 00:53:35.428477 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-04 00:53:35.428480 | orchestrator | Saturday 04 April 2026 00:45:06 +0000 (0:00:01.096) 0:01:21.846 ******** 2026-04-04 00:53:35.428492 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.428496 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.428503 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.428507 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.428511 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.428515 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.428518 | orchestrator | 2026-04-04 00:53:35.428522 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-04 00:53:35.428526 | orchestrator | Saturday 04 April 2026 00:45:06 +0000 (0:00:00.832) 0:01:22.678 ******** 2026-04-04 00:53:35.428530 | 
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428536 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428540 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428544 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428548 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428551 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-04 00:53:35.428555 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428559 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428563 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428567 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428585 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428589 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-04 00:53:35.428593 | orchestrator |
2026-04-04 00:53:35.428597 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-04 00:53:35.428601 | orchestrator | Saturday 04 April 2026 00:45:08 +0000 (0:00:01.272) 0:01:23.951 ********
2026-04-04 00:53:35.428605 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.428609 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.428612 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.428616 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.428620 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.428623 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.428627 | orchestrator |
2026-04-04 00:53:35.428631 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-04 00:53:35.428635 | orchestrator | Saturday 04 April 2026 00:45:09 +0000 (0:00:01.142) 0:01:25.093 ********
2026-04-04 00:53:35.428638 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428642 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428646 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428650 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428653 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428657 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428660 | orchestrator |
2026-04-04 00:53:35.428664 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-04 00:53:35.428668 | orchestrator | Saturday 04 April 2026 00:45:09 +0000 (0:00:00.567) 0:01:25.661 ********
2026-04-04 00:53:35.428672 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428676 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428679 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428683 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428687 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428690 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428694 | orchestrator |
2026-04-04 00:53:35.428700 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-04 00:53:35.428704 | orchestrator | Saturday 04 April 2026 00:45:10 +0000 (0:00:00.875) 0:01:26.536 ********
2026-04-04 00:53:35.428708 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428712 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428715 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428719 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428723 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428727 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428730 | orchestrator |
2026-04-04 00:53:35.428734 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-04 00:53:35.428741 | orchestrator | Saturday 04 April 2026 00:45:11 +0000 (0:00:00.721) 0:01:27.258 ********
2026-04-04 00:53:35.428745 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.428749 | orchestrator |
2026-04-04 00:53:35.428752 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-04 00:53:35.428756 | orchestrator | Saturday 04 April 2026 00:45:12 +0000 (0:00:01.318) 0:01:28.576 ********
2026-04-04 00:53:35.428760 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.428764 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.428767 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.428771 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.428775 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.428778 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.428782 | orchestrator |
2026-04-04 00:53:35.428786 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-04 00:53:35.428790 | orchestrator | Saturday 04 April 2026 00:46:12 +0000 (0:00:59.752) 0:02:28.329 ********
2026-04-04 00:53:35.428794 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428798 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428801 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428805 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428809 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428812 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428816 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428820 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428824 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428827 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428831 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428835 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428839 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428842 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428846 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428850 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428853 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428857 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428861 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428865 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428881 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-04 00:53:35.428885 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-04 00:53:35.428889 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-04 00:53:35.428893 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428897 | orchestrator |
2026-04-04 00:53:35.428901 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-04 00:53:35.428904 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:00.602) 0:02:28.931 ********
2026-04-04 00:53:35.428908 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428912 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428915 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428922 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428925 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428929 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428933 | orchestrator |
2026-04-04 00:53:35.428937 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-04 00:53:35.428940 | orchestrator | Saturday 04 April 2026 00:46:13 +0000 (0:00:00.676) 0:02:29.607 ********
2026-04-04 00:53:35.428944 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428948 | orchestrator |
2026-04-04 00:53:35.428952 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-04 00:53:35.428956 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:00.115) 0:02:29.723 ********
2026-04-04 00:53:35.428959 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.428963 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428967 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.428970 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.428974 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.428978 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.428981 | orchestrator |
2026-04-04 00:53:35.428985 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-04 00:53:35.428989 | orchestrator | Saturday 04 April 2026 00:46:14 +0000 (0:00:00.524) 0:02:30.248 ********
2026-04-04 00:53:35.428995 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.428999 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429003 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429006 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429010 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429014 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429017 | orchestrator |
2026-04-04 00:53:35.429021 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-04 00:53:35.429040 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:00.593) 0:02:31.031 ********
2026-04-04 00:53:35.429046 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429052 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429057 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429063 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429069 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429075 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429081 | orchestrator |
2026-04-04 00:53:35.429087 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-04 00:53:35.429094 | orchestrator | Saturday 04 April 2026 00:46:15 +0000 (0:00:02.219) 0:02:31.624 ********
2026-04-04 00:53:35.429100 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.429107 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.429112 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.429116 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.429119 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.429123 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.429127 | orchestrator |
2026-04-04 00:53:35.429130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-04 00:53:35.429134 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:02.219) 0:02:33.844 ********
2026-04-04 00:53:35.429138 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.429142 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.429145 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.429149 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.429153 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.429157 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.429160 | orchestrator |
2026-04-04 00:53:35.429164 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-04 00:53:35.429168 | orchestrator | Saturday 04 April 2026 00:46:18 +0000 (0:00:00.511) 0:02:34.356 ********
2026-04-04 00:53:35.429172 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.429181 | orchestrator |
2026-04-04 00:53:35.429185 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-04 00:53:35.429189 | orchestrator | Saturday 04 April 2026 00:46:19 +0000 (0:00:01.199) 0:02:35.555 ********
2026-04-04 00:53:35.429192 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429196 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429200 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429204 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429207 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429211 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429215 | orchestrator |
2026-04-04 00:53:35.429219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-04 00:53:35.429222 | orchestrator | Saturday 04 April 2026 00:46:20 +0000 (0:00:00.593) 0:02:36.148 ********
2026-04-04 00:53:35.429226 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429230 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429234 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429237 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429241 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429245 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429248 | orchestrator |
2026-04-04 00:53:35.429252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-04 00:53:35.429256 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:00.643) 0:02:36.791 ********
2026-04-04 00:53:35.429260 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429264 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429282 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429286 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429290 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429294 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429298 | orchestrator |
2026-04-04 00:53:35.429301 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-04 00:53:35.429305 | orchestrator | Saturday 04 April 2026 00:46:21 +0000 (0:00:00.574) 0:02:37.366 ********
2026-04-04 00:53:35.429309 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429313 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429316 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429320 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429324 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429328 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429331 | orchestrator |
2026-04-04 00:53:35.429335 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-04 00:53:35.429339 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.664) 0:02:38.031 ********
2026-04-04 00:53:35.429343 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429346 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429350 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429354 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429357 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429361 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429365 | orchestrator |
2026-04-04 00:53:35.429369 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-04 00:53:35.429372 | orchestrator | Saturday 04 April 2026 00:46:22 +0000 (0:00:00.591) 0:02:38.622 ********
2026-04-04 00:53:35.429376 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429380 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429384 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429387 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429391 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429395 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429398 | orchestrator |
2026-04-04 00:53:35.429405 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-04 00:53:35.429411 | orchestrator | Saturday 04 April 2026 00:46:23 +0000 (0:00:00.790) 0:02:39.412 ********
2026-04-04 00:53:35.429415 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429419 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429423 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429426 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429430 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429434 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429437 | orchestrator |
2026-04-04 00:53:35.429441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-04 00:53:35.429445 | orchestrator | Saturday 04 April 2026 00:46:24 +0000 (0:00:00.621) 0:02:40.034 ********
2026-04-04 00:53:35.429448 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.429452 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.429456 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.429460 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429463 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429467 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429471 | orchestrator |
2026-04-04 00:53:35.429474 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-04 00:53:35.429478 | orchestrator | Saturday 04 April 2026 00:46:25 +0000 (0:00:00.850) 0:02:40.885 ********
2026-04-04 00:53:35.429482 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.429486 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.429490 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.429494 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.429498 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.429501 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.429505 | orchestrator |
2026-04-04 00:53:35.429509 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-04 00:53:35.429513 | orchestrator | Saturday 04 April 2026 00:46:26 +0000 (0:00:01.320) 0:02:42.205 ********
2026-04-04 00:53:35.429516 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.429520 | orchestrator |
2026-04-04 00:53:35.429524 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-04 00:53:35.429528 | orchestrator | Saturday 04 April 2026 00:46:27 +0000 (0:00:01.395) 0:02:43.601 ********
2026-04-04 00:53:35.429532 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-04-04 00:53:35.429536 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-04-04 00:53:35.429539 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-04-04 00:53:35.429543 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-04-04 00:53:35.429547 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429551 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-04-04 00:53:35.429554 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-04-04 00:53:35.429558 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429562 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429565 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429580 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-04 00:53:35.429584 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429592 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429595 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429613 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-04 00:53:35.429618 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429622 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429626 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429629 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429633 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429637 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-04 00:53:35.429640 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429648 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429652 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429655 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-04 00:53:35.429663 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429670 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429674 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429678 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429681 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429685 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-04 00:53:35.429689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429695 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429699 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429703 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429711 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-04 00:53:35.429714 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429718 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429722 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429725 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429729 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429733 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-04 00:53:35.429736 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429740 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429744 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429751 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429755 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429758 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429762 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-04 00:53:35.429766 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429769 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429776 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429783 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429787 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-04 00:53:35.429791 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429794 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429798 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429802 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429805 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-04 00:53:35.429813 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429817 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429820 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429824 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429828 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429832 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-04 00:53:35.429847 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429851 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429855 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429863 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-04 00:53:35.429870 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429874 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-04-04 00:53:35.429878 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-04-04 00:53:35.429881 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-04-04 00:53:35.429885 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-04-04 00:53:35.429889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-04 00:53:35.429892 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-04-04 00:53:35.429896 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-04-04 00:53:35.429900 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-04-04 00:53:35.429904 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-04-04 00:53:35.429907 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-04-04 00:53:35.429911 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-04-04 00:53:35.429915 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-04-04 00:53:35.429918 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-04-04 00:53:35.429922 | orchestrator |
2026-04-04 00:53:35.429928 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-04 00:53:35.429932 | orchestrator | Saturday 04 April 2026 00:46:35 +0000 (0:00:07.179) 0:02:50.780 ********
2026-04-04 00:53:35.429936 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.429939 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.429943 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.429957 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:35.429961 | orchestrator |
2026-04-04 00:53:35.429965 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-04-04 00:53:35.429968 | orchestrator | Saturday 04 April 2026 00:46:36 +0000 (0:00:00.964) 0:02:51.745 ********
2026-04-04 00:53:35.429972 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.429976 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.429980 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.429984 | orchestrator |
2026-04-04 00:53:35.429987 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-04 00:53:35.429991 | orchestrator | Saturday 04 April 2026 00:46:36 +0000 (0:00:00.755) 0:02:52.501 ********
2026-04-04 00:53:35.429995 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.429999 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.430003 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-04 00:53:35.430006 | orchestrator |
2026-04-04 00:53:35.430010 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-04 00:53:35.430064 | orchestrator | Saturday 04 April 2026 00:46:38 +0000 (0:00:01.396) 0:02:53.898 ********
2026-04-04 00:53:35.430069 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.430073 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.430077 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430081 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.430085 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430088 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430092 | orchestrator |
2026-04-04 00:53:35.430096 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-04 00:53:35.430100 | orchestrator | Saturday 04 April 2026 00:46:38 +0000 (0:00:00.656) 0:02:54.555 ********
2026-04-04 00:53:35.430104 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.430108 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.430112 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.430115 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430119 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430123 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430127 | orchestrator |
2026-04-04 00:53:35.430131 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-04 00:53:35.430135 | orchestrator | Saturday 04 April 2026 00:46:39 +0000 (0:00:00.706) 0:02:55.262 ********
2026-04-04 00:53:35.430139 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430143 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.430146 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.430150 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430154 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430158 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430162 | orchestrator |
2026-04-04 00:53:35.430181 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-04 00:53:35.430185 | orchestrator | Saturday 04 April 2026 00:46:40 +0000 (0:00:00.714) 0:02:55.976 ********
2026-04-04 00:53:35.430189 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430193 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.430196 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.430200 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430207 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430211 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430215 | orchestrator |
2026-04-04 00:53:35.430218 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-04 00:53:35.430222 | orchestrator | Saturday 04 April 2026 00:46:40 +0000 (0:00:00.576) 0:02:56.553 ********
2026-04-04 00:53:35.430226 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430230 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.430233 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.430237 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430241 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430244 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430248 | orchestrator |
2026-04-04 00:53:35.430252 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-04 00:53:35.430256 | orchestrator | Saturday 04 April 2026 00:46:41 +0000 (0:00:00.758) 0:02:57.311 ********
2026-04-04 00:53:35.430260 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430263 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.430267 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.430271 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430274 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430278 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430282 | orchestrator |
2026-04-04 00:53:35.430286 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-04 00:53:35.430289 | orchestrator | Saturday 04 April 2026 00:46:42 +0000 (0:00:00.773) 0:02:58.084 ********
2026-04-04 00:53:35.430293 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430301 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.430305 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.430308 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.430312 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.430316 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.430320 | orchestrator |
2026-04-04 00:53:35.430323 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-04 00:53:35.430327 | orchestrator | Saturday 04 April 2026 00:46:43 +0000 (0:00:01.048) 0:02:59.133 ********
2026-04-04 00:53:35.430331 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.430335 |
orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430338 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430342 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430346 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430349 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430353 | orchestrator | 2026-04-04 00:53:35.430357 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-04 00:53:35.430361 | orchestrator | Saturday 04 April 2026 00:46:44 +0000 (0:00:00.708) 0:02:59.842 ******** 2026-04-04 00:53:35.430364 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430368 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430372 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430376 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.430379 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.430383 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.430387 | orchestrator | 2026-04-04 00:53:35.430391 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-04 00:53:35.430394 | orchestrator | Saturday 04 April 2026 00:46:46 +0000 (0:00:01.939) 0:03:01.782 ******** 2026-04-04 00:53:35.430398 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.430402 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.430406 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.430409 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430413 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430418 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430434 | orchestrator | 2026-04-04 00:53:35.430442 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-04 00:53:35.430448 | orchestrator | Saturday 04 April 2026 00:46:46 +0000 (0:00:00.644) 
0:03:02.427 ******** 2026-04-04 00:53:35.430454 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.430460 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.430466 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430472 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.430478 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430485 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430491 | orchestrator | 2026-04-04 00:53:35.430495 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-04 00:53:35.430499 | orchestrator | Saturday 04 April 2026 00:46:47 +0000 (0:00:01.059) 0:03:03.487 ******** 2026-04-04 00:53:35.430503 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430506 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430510 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430514 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430517 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430521 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430525 | orchestrator | 2026-04-04 00:53:35.430528 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-04 00:53:35.430532 | orchestrator | Saturday 04 April 2026 00:46:48 +0000 (0:00:00.734) 0:03:04.221 ******** 2026-04-04 00:53:35.430536 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.430540 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.430544 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.430547 | orchestrator | skipping: 
[testbed-node-0] 2026-04-04 00:53:35.430566 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430571 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430575 | orchestrator | 2026-04-04 00:53:35.430579 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-04 00:53:35.430582 | orchestrator | Saturday 04 April 2026 00:46:50 +0000 (0:00:01.493) 0:03:05.714 ******** 2026-04-04 00:53:35.430587 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-04 00:53:35.430592 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-04 00:53:35.430597 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430601 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-04 00:53:35.430607 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-04 00:53:35.430611 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
00:53:35.430615 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-04 00:53:35.430623 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-04 00:53:35.430627 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430630 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430634 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430638 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430642 | orchestrator | 2026-04-04 00:53:35.430645 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-04 00:53:35.430649 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:01.039) 0:03:06.753 ******** 2026-04-04 00:53:35.430653 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430657 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430660 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430664 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430668 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430671 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430675 | orchestrator | 2026-04-04 00:53:35.430679 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-04 00:53:35.430683 | orchestrator | Saturday 04 April 2026 00:46:51 +0000 (0:00:00.860) 0:03:07.614 ******** 2026-04-04 00:53:35.430687 | 
orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430690 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430694 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430698 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430701 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430705 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430709 | orchestrator | 2026-04-04 00:53:35.430713 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-04 00:53:35.430717 | orchestrator | Saturday 04 April 2026 00:46:52 +0000 (0:00:00.619) 0:03:08.233 ******** 2026-04-04 00:53:35.430720 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430724 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430728 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430731 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430735 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430739 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430742 | orchestrator | 2026-04-04 00:53:35.430746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-04 00:53:35.430750 | orchestrator | Saturday 04 April 2026 00:46:53 +0000 (0:00:00.857) 0:03:09.091 ******** 2026-04-04 00:53:35.430754 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430757 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430761 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430765 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430768 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430772 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430776 | orchestrator | 2026-04-04 00:53:35.430780 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv6] **** 2026-04-04 00:53:35.430795 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:00.613) 0:03:09.704 ******** 2026-04-04 00:53:35.430799 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430803 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.430807 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.430810 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430817 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430820 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430824 | orchestrator | 2026-04-04 00:53:35.430828 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-04 00:53:35.430831 | orchestrator | Saturday 04 April 2026 00:46:54 +0000 (0:00:00.710) 0:03:10.415 ******** 2026-04-04 00:53:35.430835 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.430839 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.430843 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.430846 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.430850 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.430854 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.430858 | orchestrator | 2026-04-04 00:53:35.430861 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-04 00:53:35.430865 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.713) 0:03:11.128 ******** 2026-04-04 00:53:35.430869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.430872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.430876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.430880 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430884 | orchestrator | 2026-04-04 
00:53:35.430887 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-04 00:53:35.430891 | orchestrator | Saturday 04 April 2026 00:46:55 +0000 (0:00:00.434) 0:03:11.563 ******** 2026-04-04 00:53:35.430895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.430902 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.430912 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.430919 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430926 | orchestrator | 2026-04-04 00:53:35.430932 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-04 00:53:35.430940 | orchestrator | Saturday 04 April 2026 00:46:56 +0000 (0:00:00.512) 0:03:12.075 ******** 2026-04-04 00:53:35.430946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.430953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.430960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.430968 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.430975 | orchestrator | 2026-04-04 00:53:35.430982 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-04 00:53:35.430989 | orchestrator | Saturday 04 April 2026 00:46:57 +0000 (0:00:00.685) 0:03:12.761 ******** 2026-04-04 00:53:35.430996 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.431003 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.431009 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.431016 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.431037 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.431044 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.431050 | orchestrator | 2026-04-04 
00:53:35.431057 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-04 00:53:35.431063 | orchestrator | Saturday 04 April 2026 00:46:57 +0000 (0:00:00.706) 0:03:13.468 ******** 2026-04-04 00:53:35.431070 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-04 00:53:35.431076 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-04 00:53:35.431082 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-04 00:53:35.431089 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:53:35.431095 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-04 00:53:35.431102 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.431108 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.431114 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-04 00:53:35.431121 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.431132 | orchestrator | 2026-04-04 00:53:35.431138 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-04 00:53:35.431145 | orchestrator | Saturday 04 April 2026 00:46:59 +0000 (0:00:01.563) 0:03:15.032 ******** 2026-04-04 00:53:35.431151 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.431158 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.431164 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.431171 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.431177 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.431183 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.431190 | orchestrator | 2026-04-04 00:53:35.431196 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:53:35.431203 | orchestrator | Saturday 04 April 2026 00:47:01 +0000 (0:00:02.611) 0:03:17.644 ******** 2026-04-04 00:53:35.431209 | orchestrator | changed: [testbed-node-3] 2026-04-04 
00:53:35.431215 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.431222 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.431228 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.431235 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.431241 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.431247 | orchestrator | 2026-04-04 00:53:35.431253 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-04 00:53:35.431259 | orchestrator | Saturday 04 April 2026 00:47:03 +0000 (0:00:01.661) 0:03:19.306 ******** 2026-04-04 00:53:35.431266 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431272 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.431278 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.431285 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.431291 | orchestrator | 2026-04-04 00:53:35.431298 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-04 00:53:35.431324 | orchestrator | Saturday 04 April 2026 00:47:04 +0000 (0:00:00.855) 0:03:20.161 ******** 2026-04-04 00:53:35.431331 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.431338 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.431345 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.431351 | orchestrator | 2026-04-04 00:53:35.431357 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-04 00:53:35.431364 | orchestrator | Saturday 04 April 2026 00:47:04 +0000 (0:00:00.290) 0:03:20.451 ******** 2026-04-04 00:53:35.431370 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.431377 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.431383 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.431389 | 
orchestrator | 2026-04-04 00:53:35.431396 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-04 00:53:35.431402 | orchestrator | Saturday 04 April 2026 00:47:05 +0000 (0:00:01.179) 0:03:21.631 ******** 2026-04-04 00:53:35.431409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-04 00:53:35.431415 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-04 00:53:35.431422 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-04 00:53:35.431428 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.431434 | orchestrator | 2026-04-04 00:53:35.431441 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-04 00:53:35.431447 | orchestrator | Saturday 04 April 2026 00:47:06 +0000 (0:00:00.763) 0:03:22.395 ******** 2026-04-04 00:53:35.431454 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.431460 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.431467 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.431473 | orchestrator | 2026-04-04 00:53:35.431480 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-04 00:53:35.431486 | orchestrator | Saturday 04 April 2026 00:47:07 +0000 (0:00:00.331) 0:03:22.726 ******** 2026-04-04 00:53:35.431493 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.431503 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.431510 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.431519 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.431526 | orchestrator | 2026-04-04 00:53:35.431533 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-04 00:53:35.431539 | orchestrator | Saturday 04 April 2026 00:47:08 
+0000 (0:00:00.977) 0:03:23.703 ******** 2026-04-04 00:53:35.431545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.431552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.431558 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.431564 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431571 | orchestrator | 2026-04-04 00:53:35.431577 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-04 00:53:35.431584 | orchestrator | Saturday 04 April 2026 00:47:08 +0000 (0:00:00.393) 0:03:24.096 ******** 2026-04-04 00:53:35.431590 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.431597 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.431603 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431609 | orchestrator | 2026-04-04 00:53:35.431616 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-04 00:53:35.431622 | orchestrator | Saturday 04 April 2026 00:47:08 +0000 (0:00:00.491) 0:03:24.588 ******** 2026-04-04 00:53:35.431628 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431635 | orchestrator | 2026-04-04 00:53:35.431641 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-04 00:53:35.431647 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:00.210) 0:03:24.798 ******** 2026-04-04 00:53:35.431653 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431660 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.431666 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.431673 | orchestrator | 2026-04-04 00:53:35.431679 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-04 00:53:35.431685 | orchestrator | Saturday 04 April 2026 
00:47:09 +0000 (0:00:00.292) 0:03:25.091 ******** 2026-04-04 00:53:35.431692 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431698 | orchestrator | 2026-04-04 00:53:35.431704 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-04 00:53:35.431711 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:00.200) 0:03:25.291 ******** 2026-04-04 00:53:35.431717 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431723 | orchestrator | 2026-04-04 00:53:35.431729 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-04 00:53:35.431736 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:00.197) 0:03:25.489 ******** 2026-04-04 00:53:35.431742 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431749 | orchestrator | 2026-04-04 00:53:35.431756 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-04 00:53:35.431762 | orchestrator | Saturday 04 April 2026 00:47:09 +0000 (0:00:00.096) 0:03:25.585 ******** 2026-04-04 00:53:35.431769 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431775 | orchestrator | 2026-04-04 00:53:35.431782 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-04 00:53:35.431789 | orchestrator | Saturday 04 April 2026 00:47:10 +0000 (0:00:00.184) 0:03:25.770 ******** 2026-04-04 00:53:35.431796 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431803 | orchestrator | 2026-04-04 00:53:35.431809 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-04 00:53:35.431816 | orchestrator | Saturday 04 April 2026 00:47:10 +0000 (0:00:00.186) 0:03:25.956 ******** 2026-04-04 00:53:35.431823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.431830 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.431840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.431847 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431853 | orchestrator | 2026-04-04 00:53:35.431859 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-04 00:53:35.431885 | orchestrator | Saturday 04 April 2026 00:47:10 +0000 (0:00:00.522) 0:03:26.479 ******** 2026-04-04 00:53:35.431892 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431898 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.431903 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.431909 | orchestrator | 2026-04-04 00:53:35.431915 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-04 00:53:35.431921 | orchestrator | Saturday 04 April 2026 00:47:11 +0000 (0:00:00.482) 0:03:26.962 ******** 2026-04-04 00:53:35.431928 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431934 | orchestrator | 2026-04-04 00:53:35.431941 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-04 00:53:35.431947 | orchestrator | Saturday 04 April 2026 00:47:11 +0000 (0:00:00.217) 0:03:27.180 ******** 2026-04-04 00:53:35.431955 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.431960 | orchestrator | 2026-04-04 00:53:35.431967 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-04 00:53:35.431973 | orchestrator | Saturday 04 April 2026 00:47:11 +0000 (0:00:00.207) 0:03:27.387 ******** 2026-04-04 00:53:35.431979 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.431985 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.431992 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.431999 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.432005 | orchestrator | 2026-04-04 00:53:35.432012 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-04 00:53:35.432019 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:00.761) 0:03:28.149 ******** 2026-04-04 00:53:35.432034 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.432038 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.432041 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.432045 | orchestrator | 2026-04-04 00:53:35.432049 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-04 00:53:35.432056 | orchestrator | Saturday 04 April 2026 00:47:12 +0000 (0:00:00.527) 0:03:28.676 ******** 2026-04-04 00:53:35.432060 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.432064 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.432068 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.432072 | orchestrator | 2026-04-04 00:53:35.432076 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-04 00:53:35.432079 | orchestrator | Saturday 04 April 2026 00:47:14 +0000 (0:00:01.440) 0:03:30.117 ******** 2026-04-04 00:53:35.432083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.432087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.432091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.432095 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.432098 | orchestrator | 2026-04-04 00:53:35.432102 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-04 00:53:35.432106 | orchestrator | Saturday 04 April 2026 00:47:14 +0000 (0:00:00.562) 
0:03:30.680 ********
2026-04-04 00:53:35.432110 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.432114 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.432117 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.432121 | orchestrator |
2026-04-04 00:53:35.432125 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-04 00:53:35.432129 | orchestrator | Saturday 04 April 2026 00:47:15 +0000 (0:00:00.267) 0:03:30.947 ********
2026-04-04 00:53:35.432132 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432140 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432144 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432148 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:35.432152 | orchestrator |
2026-04-04 00:53:35.432156 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-04 00:53:35.432160 | orchestrator | Saturday 04 April 2026 00:47:16 +0000 (0:00:00.908) 0:03:31.856 ********
2026-04-04 00:53:35.432163 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.432167 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.432171 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.432175 | orchestrator |
2026-04-04 00:53:35.432179 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-04 00:53:35.432182 | orchestrator | Saturday 04 April 2026 00:47:16 +0000 (0:00:00.325) 0:03:32.181 ********
2026-04-04 00:53:35.432186 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.432190 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.432194 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.432197 | orchestrator |
2026-04-04 00:53:35.432201 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-04 00:53:35.432205 | orchestrator | Saturday 04 April 2026 00:47:17 +0000 (0:00:01.319) 0:03:33.501 ********
2026-04-04 00:53:35.432209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:53:35.432213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:53:35.432216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:53:35.432220 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.432224 | orchestrator |
2026-04-04 00:53:35.432228 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-04 00:53:35.432231 | orchestrator | Saturday 04 April 2026 00:47:18 +0000 (0:00:00.699) 0:03:34.201 ********
2026-04-04 00:53:35.432235 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.432239 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.432243 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.432248 | orchestrator |
2026-04-04 00:53:35.432255 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-04 00:53:35.432261 | orchestrator | Saturday 04 April 2026 00:47:18 +0000 (0:00:00.328) 0:03:34.529 ********
2026-04-04 00:53:35.432267 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.432274 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.432281 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.432287 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432294 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432316 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432320 | orchestrator |
2026-04-04 00:53:35.432324 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-04 00:53:35.432328 | orchestrator | Saturday 04 April 2026 00:47:19 +0000 (0:00:00.868) 0:03:35.397 ********
2026-04-04 00:53:35.432332 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.432335 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.432339 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.432343 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.432347 | orchestrator |
2026-04-04 00:53:35.432350 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-04 00:53:35.432354 | orchestrator | Saturday 04 April 2026 00:47:20 +0000 (0:00:01.027) 0:03:36.425 ********
2026-04-04 00:53:35.432358 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432362 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432365 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432369 | orchestrator |
2026-04-04 00:53:35.432373 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-04 00:53:35.432377 | orchestrator | Saturday 04 April 2026 00:47:20 +0000 (0:00:00.249) 0:03:36.674 ********
2026-04-04 00:53:35.432384 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.432387 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.432391 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.432397 | orchestrator |
2026-04-04 00:53:35.432404 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-04 00:53:35.432410 | orchestrator | Saturday 04 April 2026 00:47:22 +0000 (0:00:01.190) 0:03:37.865 ********
2026-04-04 00:53:35.432417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:53:35.432424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:53:35.432431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:53:35.432438 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432445 | orchestrator |
2026-04-04 00:53:35.432456 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-04 00:53:35.432463 | orchestrator | Saturday 04 April 2026 00:47:22 +0000 (0:00:00.596) 0:03:38.462 ********
2026-04-04 00:53:35.432471 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432478 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432485 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432489 | orchestrator |
2026-04-04 00:53:35.432493 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-04 00:53:35.432497 | orchestrator |
2026-04-04 00:53:35.432501 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:53:35.432504 | orchestrator | Saturday 04 April 2026 00:47:23 +0000 (0:00:00.413) 0:03:38.875 ********
2026-04-04 00:53:35.432508 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.432512 | orchestrator |
2026-04-04 00:53:35.432516 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:53:35.432520 | orchestrator | Saturday 04 April 2026 00:47:23 +0000 (0:00:00.633) 0:03:39.508 ********
2026-04-04 00:53:35.432523 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.432527 | orchestrator |
2026-04-04 00:53:35.432531 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:53:35.432535 | orchestrator | Saturday 04 April 2026 00:47:24 +0000 (0:00:00.512) 0:03:40.021 ********
2026-04-04 00:53:35.432538 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432542 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432546 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432550 | orchestrator |
2026-04-04 00:53:35.432553 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:53:35.432557 | orchestrator | Saturday 04 April 2026 00:47:25 +0000 (0:00:00.754) 0:03:40.775 ********
2026-04-04 00:53:35.432561 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432565 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432569 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432573 | orchestrator |
2026-04-04 00:53:35.432576 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:53:35.432580 | orchestrator | Saturday 04 April 2026 00:47:25 +0000 (0:00:00.727) 0:03:41.503 ********
2026-04-04 00:53:35.432584 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432588 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432591 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432595 | orchestrator |
2026-04-04 00:53:35.432599 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:53:35.432603 | orchestrator | Saturday 04 April 2026 00:47:26 +0000 (0:00:00.370) 0:03:41.874 ********
2026-04-04 00:53:35.432606 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432610 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432614 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432618 | orchestrator |
2026-04-04 00:53:35.432621 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:53:35.432628 | orchestrator | Saturday 04 April 2026 00:47:26 +0000 (0:00:00.379) 0:03:42.254 ********
2026-04-04 00:53:35.432632 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432636 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432640 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432643 | orchestrator |
2026-04-04 00:53:35.432647 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:53:35.432651 | orchestrator | Saturday 04 April 2026 00:47:27 +0000 (0:00:00.683) 0:03:42.938 ********
2026-04-04 00:53:35.432655 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432658 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432662 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432666 | orchestrator |
2026-04-04 00:53:35.432670 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:53:35.432673 | orchestrator | Saturday 04 April 2026 00:47:27 +0000 (0:00:00.330) 0:03:43.269 ********
2026-04-04 00:53:35.432692 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432697 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432701 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432704 | orchestrator |
2026-04-04 00:53:35.432708 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:53:35.432712 | orchestrator | Saturday 04 April 2026 00:47:28 +0000 (0:00:00.525) 0:03:43.795 ********
2026-04-04 00:53:35.432716 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432720 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432723 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432727 | orchestrator |
2026-04-04 00:53:35.432731 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:53:35.432736 | orchestrator | Saturday 04 April 2026 00:47:28 +0000 (0:00:00.657) 0:03:44.452 ********
2026-04-04 00:53:35.432742 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432749 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432756 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432763 | orchestrator |
2026-04-04 00:53:35.432770 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:53:35.432777 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.718) 0:03:45.171 ********
2026-04-04 00:53:35.432784 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432791 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432798 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432805 | orchestrator |
2026-04-04 00:53:35.432813 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-04 00:53:35.432820 | orchestrator | Saturday 04 April 2026 00:47:29 +0000 (0:00:00.261) 0:03:45.433 ********
2026-04-04 00:53:35.432826 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.432833 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.432840 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.432848 | orchestrator |
2026-04-04 00:53:35.432855 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-04 00:53:35.432862 | orchestrator | Saturday 04 April 2026 00:47:30 +0000 (0:00:00.516) 0:03:45.949 ********
2026-04-04 00:53:35.432869 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432882 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432889 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432896 | orchestrator |
2026-04-04 00:53:35.432903 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-04 00:53:35.432910 | orchestrator | Saturday 04 April 2026 00:47:30 +0000 (0:00:00.349) 0:03:46.299 ********
2026-04-04 00:53:35.432918 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432925 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432932 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432939 | orchestrator |
2026-04-04 00:53:35.432946 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-04 00:53:35.432953 | orchestrator | Saturday 04 April 2026 00:47:30 +0000 (0:00:00.266) 0:03:46.565 ********
2026-04-04 00:53:35.432965 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.432972 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.432979 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.432986 | orchestrator |
2026-04-04 00:53:35.432993 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-04 00:53:35.433000 | orchestrator | Saturday 04 April 2026 00:47:31 +0000 (0:00:00.258) 0:03:46.824 ********
2026-04-04 00:53:35.433007 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433014 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.433020 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.433043 | orchestrator |
2026-04-04 00:53:35.433049 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-04 00:53:35.433056 | orchestrator | Saturday 04 April 2026 00:47:31 +0000 (0:00:00.428) 0:03:47.252 ********
2026-04-04 00:53:35.433062 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433069 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.433075 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.433082 | orchestrator |
2026-04-04 00:53:35.433088 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-04 00:53:35.433095 | orchestrator | Saturday 04 April 2026 00:47:31 +0000 (0:00:00.303) 0:03:47.556 ********
2026-04-04 00:53:35.433101 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433107 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433114 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433120 | orchestrator |
2026-04-04 00:53:35.433127 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-04 00:53:35.433133 | orchestrator | Saturday 04 April 2026 00:47:32 +0000 (0:00:00.308) 0:03:47.864 ********
2026-04-04 00:53:35.433140 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433146 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433152 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433159 | orchestrator |
2026-04-04 00:53:35.433165 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-04 00:53:35.433172 | orchestrator | Saturday 04 April 2026 00:47:32 +0000 (0:00:00.339) 0:03:48.204 ********
2026-04-04 00:53:35.433178 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433185 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433191 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433198 | orchestrator |
2026-04-04 00:53:35.433204 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-04 00:53:35.433210 | orchestrator | Saturday 04 April 2026 00:47:33 +0000 (0:00:00.694) 0:03:48.898 ********
2026-04-04 00:53:35.433217 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433223 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433230 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433236 | orchestrator |
2026-04-04 00:53:35.433242 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-04 00:53:35.433248 | orchestrator | Saturday 04 April 2026 00:47:33 +0000 (0:00:00.335) 0:03:49.233 ********
2026-04-04 00:53:35.433254 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.433261 | orchestrator |
2026-04-04 00:53:35.433267 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-04 00:53:35.433273 | orchestrator | Saturday 04 April 2026 00:47:34 +0000 (0:00:00.594) 0:03:49.827 ********
2026-04-04 00:53:35.433280 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433286 | orchestrator |
2026-04-04 00:53:35.433314 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-04 00:53:35.433321 | orchestrator | Saturday 04 April 2026 00:47:34 +0000 (0:00:00.346) 0:03:50.174 ********
2026-04-04 00:53:35.433327 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-04 00:53:35.433334 | orchestrator |
2026-04-04 00:53:35.433340 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-04 00:53:35.433351 | orchestrator | Saturday 04 April 2026 00:47:35 +0000 (0:00:01.091) 0:03:51.265 ********
2026-04-04 00:53:35.433358 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433364 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433371 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433377 | orchestrator |
2026-04-04 00:53:35.433383 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-04 00:53:35.433390 | orchestrator | Saturday 04 April 2026 00:47:35 +0000 (0:00:00.366) 0:03:51.632 ********
2026-04-04 00:53:35.433396 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433402 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433409 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433415 | orchestrator |
2026-04-04 00:53:35.433422 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-04 00:53:35.433428 | orchestrator | Saturday 04 April 2026 00:47:36 +0000 (0:00:00.333) 0:03:51.965 ********
2026-04-04 00:53:35.433434 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433440 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433447 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433453 | orchestrator |
2026-04-04 00:53:35.433460 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-04 00:53:35.433467 | orchestrator | Saturday 04 April 2026 00:47:37 +0000 (0:00:01.217) 0:03:53.183 ********
2026-04-04 00:53:35.433473 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433480 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433486 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433493 | orchestrator |
2026-04-04 00:53:35.433499 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-04 00:53:35.433509 | orchestrator | Saturday 04 April 2026 00:47:38 +0000 (0:00:01.023) 0:03:54.207 ********
2026-04-04 00:53:35.433516 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433522 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433528 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433534 | orchestrator |
2026-04-04 00:53:35.433541 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-04 00:53:35.433547 | orchestrator | Saturday 04 April 2026 00:47:39 +0000 (0:00:00.627) 0:03:54.834 ********
2026-04-04 00:53:35.433554 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433560 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433567 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433573 | orchestrator |
2026-04-04 00:53:35.433579 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-04 00:53:35.433586 | orchestrator | Saturday 04 April 2026 00:47:39 +0000 (0:00:00.638) 0:03:55.472 ********
2026-04-04 00:53:35.433592 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433599 | orchestrator |
2026-04-04 00:53:35.433605 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-04 00:53:35.433611 | orchestrator | Saturday 04 April 2026 00:47:41 +0000 (0:00:01.216) 0:03:56.689 ********
2026-04-04 00:53:35.433618 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433624 | orchestrator |
2026-04-04 00:53:35.433631 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-04 00:53:35.433637 | orchestrator | Saturday 04 April 2026 00:47:41 +0000 (0:00:00.753) 0:03:57.442 ********
2026-04-04 00:53:35.433643 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:53:35.433650 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-04 00:53:35.433656 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:53:35.433662 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-04 00:53:35.433669 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-04 00:53:35.433676 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-04 00:53:35.433682 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-04 00:53:35.433693 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-04 00:53:35.433700 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-04 00:53:35.433707 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-04 00:53:35.433713 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-04 00:53:35.433720 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-04 00:53:35.433727 | orchestrator |
2026-04-04 00:53:35.433733 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-04 00:53:35.433739 | orchestrator | Saturday 04 April 2026 00:47:45 +0000 (0:00:03.604) 0:04:01.047 ********
2026-04-04 00:53:35.433746 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433752 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433758 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433764 | orchestrator |
2026-04-04 00:53:35.433771 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-04 00:53:35.433777 | orchestrator | Saturday 04 April 2026 00:47:46 +0000 (0:00:01.604) 0:04:02.652 ********
2026-04-04 00:53:35.433783 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433789 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433796 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433802 | orchestrator |
2026-04-04 00:53:35.433808 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-04 00:53:35.433815 | orchestrator | Saturday 04 April 2026 00:47:47 +0000 (0:00:00.338) 0:04:02.991 ********
2026-04-04 00:53:35.433821 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.433827 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.433833 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.433840 | orchestrator |
2026-04-04 00:53:35.433846 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-04 00:53:35.433853 | orchestrator | Saturday 04 April 2026 00:47:47 +0000 (0:00:00.294) 0:04:03.285 ********
2026-04-04 00:53:35.433859 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433886 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433892 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433896 | orchestrator |
2026-04-04 00:53:35.433900 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-04 00:53:35.433904 | orchestrator | Saturday 04 April 2026 00:47:49 +0000 (0:00:01.843) 0:04:05.128 ********
2026-04-04 00:53:35.433908 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.433912 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.433915 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.433919 | orchestrator |
2026-04-04 00:53:35.433923 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-04 00:53:35.433926 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:01.707) 0:04:06.835 ********
2026-04-04 00:53:35.433930 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433934 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.433937 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.433941 | orchestrator |
2026-04-04 00:53:35.433945 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-04 00:53:35.433949 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:00.250) 0:04:07.086 ********
2026-04-04 00:53:35.433952 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.433956 | orchestrator |
2026-04-04 00:53:35.433960 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-04 00:53:35.433963 | orchestrator | Saturday 04 April 2026 00:47:51 +0000 (0:00:00.504) 0:04:07.590 ********
2026-04-04 00:53:35.433967 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433971 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.433975 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.433978 | orchestrator |
2026-04-04 00:53:35.433982 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-04 00:53:35.433986 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.445) 0:04:08.035 ********
2026-04-04 00:53:35.433996 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.433999 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434003 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434007 | orchestrator |
2026-04-04 00:53:35.434011 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-04 00:53:35.434044 | orchestrator | Saturday 04 April 2026 00:47:52 +0000 (0:00:00.284) 0:04:08.320 ********
2026-04-04 00:53:35.434048 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.434052 | orchestrator |
2026-04-04 00:53:35.434055 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-04 00:53:35.434059 | orchestrator | Saturday 04 April 2026 00:47:53 +0000 (0:00:00.461) 0:04:08.781 ********
2026-04-04 00:53:35.434063 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.434067 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.434070 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.434074 | orchestrator |
2026-04-04 00:53:35.434078 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-04 00:53:35.434082 | orchestrator | Saturday 04 April 2026 00:47:54 +0000 (0:00:01.651) 0:04:10.433 ********
2026-04-04 00:53:35.434085 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.434089 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.434093 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.434097 | orchestrator |
2026-04-04 00:53:35.434101 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-04 00:53:35.434104 | orchestrator | Saturday 04 April 2026 00:47:56 +0000 (0:00:01.301) 0:04:11.735 ********
2026-04-04 00:53:35.434108 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.434112 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.434115 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.434119 | orchestrator |
2026-04-04 00:53:35.434123 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-04 00:53:35.434127 | orchestrator | Saturday 04 April 2026 00:47:57 +0000 (0:00:01.890) 0:04:13.625 ********
2026-04-04 00:53:35.434130 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.434134 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.434138 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.434141 | orchestrator |
2026-04-04 00:53:35.434145 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-04 00:53:35.434149 | orchestrator | Saturday 04 April 2026 00:47:59 +0000 (0:00:01.959) 0:04:15.585 ********
2026-04-04 00:53:35.434153 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.434157 | orchestrator |
2026-04-04 00:53:35.434161 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-04 00:53:35.434164 | orchestrator | Saturday 04 April 2026 00:48:01 +0000 (0:00:01.783) 0:04:17.368 ********
2026-04-04 00:53:35.434173 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-04-04 00:53:35.434177 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.434181 | orchestrator |
2026-04-04 00:53:35.434185 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-04 00:53:35.434188 | orchestrator | Saturday 04 April 2026 00:48:23 +0000 (0:00:21.754) 0:04:39.123 ********
2026-04-04 00:53:35.434192 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.434196 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.434199 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.434203 | orchestrator |
2026-04-04 00:53:35.434207 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-04 00:53:35.434211 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:06.161) 0:04:45.284 ********
2026-04-04 00:53:35.434215 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434218 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434226 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434230 | orchestrator |
2026-04-04 00:53:35.434233 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-04 00:53:35.434252 | orchestrator | Saturday 04 April 2026 00:48:29 +0000 (0:00:00.253) 0:04:45.538 ********
2026-04-04 00:53:35.434258 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-04 00:53:35.434265 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-04 00:53:35.434269 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-04 00:53:35.434277 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-04 00:53:35.434281 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-04 00:53:35.434286 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__57ed2735be01f6ab14ea93b328851593f471fdcf'}])
2026-04-04 00:53:35.434291 | orchestrator |
2026-04-04 00:53:35.434295 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-04 00:53:35.434299 | orchestrator | Saturday 04 April 2026 00:48:39 +0000 (0:00:10.107) 0:04:55.645 ********
2026-04-04 00:53:35.434302 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434306 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434310 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434313 | orchestrator |
2026-04-04 00:53:35.434317 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-04 00:53:35.434321 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.358) 0:04:56.004 ********
2026-04-04 00:53:35.434324 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.434328 | orchestrator |
2026-04-04 00:53:35.434332 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-04 00:53:35.434336 | orchestrator | Saturday 04 April 2026 00:48:40 +0000 (0:00:00.442) 0:04:56.447 ********
2026-04-04 00:53:35.434339 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.434343 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.434356 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.434362 | orchestrator |
2026-04-04 00:53:35.434369 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-04 00:53:35.434375 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.459) 0:04:56.906 ********
2026-04-04 00:53:35.434381 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434387 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434394 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434400 | orchestrator |
2026-04-04 00:53:35.434412 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-04 00:53:35.434419 | orchestrator | Saturday 04 April 2026 00:48:41 +0000 (0:00:00.305) 0:04:57.211 ********
2026-04-04 00:53:35.434425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:53:35.434431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:53:35.434437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:53:35.434443 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434450 | orchestrator |
2026-04-04 00:53:35.434456 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-04 00:53:35.434462 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.509) 0:04:57.721 ********
2026-04-04 00:53:35.434469 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.434475 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.434499 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.434506 | orchestrator |
2026-04-04 00:53:35.434512 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-04 00:53:35.434518 | orchestrator |
2026-04-04 00:53:35.434525 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:53:35.434531 | orchestrator | Saturday 04 April 2026 00:48:42 +0000 (0:00:00.636) 0:04:58.357 ********
2026-04-04 00:53:35.434538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.434544 | orchestrator |
2026-04-04 00:53:35.434550 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:53:35.434557 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:00.461) 0:04:58.819 ********
2026-04-04 00:53:35.434563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.434569 | orchestrator |
2026-04-04 00:53:35.434576 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:53:35.434582 | orchestrator | Saturday 04 April 2026 00:48:43 +0000 (0:00:00.447) 0:04:59.267 ********
2026-04-04 00:53:35.434588 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.434595 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.434601 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.434608 | orchestrator |
2026-04-04 00:53:35.434614 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:53:35.434620 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:00.850) 0:05:00.117 ********
2026-04-04 00:53:35.434627 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434633 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434639 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434646 | orchestrator |
2026-04-04 00:53:35.434652 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:53:35.434662 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:00.276) 0:05:00.393 ********
2026-04-04 00:53:35.434668 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434675 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434681 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.434688 | orchestrator |
2026-04-04 00:53:35.434694 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:53:35.434701 | orchestrator | Saturday 04 April 2026 00:48:44 +0000 (0:00:00.290) 0:05:00.684 ********
2026-04-04 00:53:35.434712 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.434718 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.434724 | orchestrator | skipping:
[testbed-node-2] 2026-04-04 00:53:35.434731 | orchestrator | 2026-04-04 00:53:35.434737 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:53:35.434744 | orchestrator | Saturday 04 April 2026 00:48:45 +0000 (0:00:00.320) 0:05:01.004 ******** 2026-04-04 00:53:35.434750 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.434757 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.434763 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.434769 | orchestrator | 2026-04-04 00:53:35.434776 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:53:35.434782 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.831) 0:05:01.836 ******** 2026-04-04 00:53:35.434788 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.434795 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.434801 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.434807 | orchestrator | 2026-04-04 00:53:35.434814 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:53:35.434820 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.273) 0:05:02.109 ******** 2026-04-04 00:53:35.434827 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.434833 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.434839 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.434846 | orchestrator | 2026-04-04 00:53:35.434852 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:53:35.434859 | orchestrator | Saturday 04 April 2026 00:48:46 +0000 (0:00:00.339) 0:05:02.448 ******** 2026-04-04 00:53:35.434865 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.434871 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.434878 | orchestrator | ok: [testbed-node-2] 2026-04-04 
00:53:35.434884 | orchestrator | 2026-04-04 00:53:35.434891 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:53:35.434897 | orchestrator | Saturday 04 April 2026 00:48:47 +0000 (0:00:00.664) 0:05:03.113 ******** 2026-04-04 00:53:35.434904 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.434910 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.434916 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.434923 | orchestrator | 2026-04-04 00:53:35.434930 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:53:35.434936 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.885) 0:05:03.999 ******** 2026-04-04 00:53:35.434943 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.434950 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.434956 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.434962 | orchestrator | 2026-04-04 00:53:35.434967 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:53:35.434973 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.323) 0:05:04.323 ******** 2026-04-04 00:53:35.434979 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.434986 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.434993 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.434999 | orchestrator | 2026-04-04 00:53:35.435006 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:53:35.435012 | orchestrator | Saturday 04 April 2026 00:48:48 +0000 (0:00:00.290) 0:05:04.614 ******** 2026-04-04 00:53:35.435019 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435035 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435041 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435048 | orchestrator | 
2026-04-04 00:53:35.435054 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:53:35.435083 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.254) 0:05:04.869 ******** 2026-04-04 00:53:35.435090 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435096 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435107 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435112 | orchestrator | 2026-04-04 00:53:35.435119 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:53:35.435125 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.479) 0:05:05.348 ******** 2026-04-04 00:53:35.435132 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435138 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435145 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435151 | orchestrator | 2026-04-04 00:53:35.435158 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:53:35.435164 | orchestrator | Saturday 04 April 2026 00:48:49 +0000 (0:00:00.325) 0:05:05.673 ******** 2026-04-04 00:53:35.435171 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435177 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435184 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435190 | orchestrator | 2026-04-04 00:53:35.435196 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:53:35.435202 | orchestrator | Saturday 04 April 2026 00:48:50 +0000 (0:00:00.423) 0:05:06.096 ******** 2026-04-04 00:53:35.435209 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435215 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435221 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435227 | orchestrator | 
2026-04-04 00:53:35.435233 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:53:35.435240 | orchestrator | Saturday 04 April 2026 00:48:50 +0000 (0:00:00.258) 0:05:06.355 ******** 2026-04-04 00:53:35.435246 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.435252 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.435258 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.435265 | orchestrator | 2026-04-04 00:53:35.435271 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:53:35.435278 | orchestrator | Saturday 04 April 2026 00:48:50 +0000 (0:00:00.308) 0:05:06.663 ******** 2026-04-04 00:53:35.435288 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.435294 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.435301 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.435308 | orchestrator | 2026-04-04 00:53:35.435314 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:53:35.435321 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:00.469) 0:05:07.133 ******** 2026-04-04 00:53:35.435325 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.435329 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.435334 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.435341 | orchestrator | 2026-04-04 00:53:35.435347 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-04 00:53:35.435353 | orchestrator | Saturday 04 April 2026 00:48:51 +0000 (0:00:00.485) 0:05:07.618 ******** 2026-04-04 00:53:35.435360 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 00:53:35.435367 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:53:35.435374 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-04 00:53:35.435380 | orchestrator | 2026-04-04 00:53:35.435386 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-04 00:53:35.435392 | orchestrator | Saturday 04 April 2026 00:48:52 +0000 (0:00:00.720) 0:05:08.339 ******** 2026-04-04 00:53:35.435399 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.435405 | orchestrator | 2026-04-04 00:53:35.435412 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-04 00:53:35.435418 | orchestrator | Saturday 04 April 2026 00:48:53 +0000 (0:00:00.676) 0:05:09.016 ******** 2026-04-04 00:53:35.435425 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.435431 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.435442 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.435449 | orchestrator | 2026-04-04 00:53:35.435455 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-04 00:53:35.435461 | orchestrator | Saturday 04 April 2026 00:48:54 +0000 (0:00:00.790) 0:05:09.807 ******** 2026-04-04 00:53:35.435468 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435474 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435480 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435487 | orchestrator | 2026-04-04 00:53:35.435493 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-04 00:53:35.435499 | orchestrator | Saturday 04 April 2026 00:48:54 +0000 (0:00:00.342) 0:05:10.149 ******** 2026-04-04 00:53:35.435505 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 00:53:35.435512 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 00:53:35.435518 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-04 00:53:35.435524 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-04 00:53:35.435530 | orchestrator | 2026-04-04 00:53:35.435536 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-04 00:53:35.435543 | orchestrator | Saturday 04 April 2026 00:49:01 +0000 (0:00:07.301) 0:05:17.450 ******** 2026-04-04 00:53:35.435549 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.435556 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.435562 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.435568 | orchestrator | 2026-04-04 00:53:35.435574 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-04 00:53:35.435581 | orchestrator | Saturday 04 April 2026 00:49:02 +0000 (0:00:00.499) 0:05:17.950 ******** 2026-04-04 00:53:35.435587 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-04 00:53:35.435593 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-04 00:53:35.435599 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-04 00:53:35.435606 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-04 00:53:35.435613 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.435641 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.435649 | orchestrator | 2026-04-04 00:53:35.435655 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:53:35.435662 | orchestrator | Saturday 04 April 2026 00:49:03 +0000 (0:00:01.430) 0:05:19.380 ******** 2026-04-04 00:53:35.435668 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-04 00:53:35.435675 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-04 00:53:35.435681 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-04 
00:53:35.435687 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 00:53:35.435693 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-04 00:53:35.435699 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-04 00:53:35.435706 | orchestrator | 2026-04-04 00:53:35.435712 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-04 00:53:35.435718 | orchestrator | Saturday 04 April 2026 00:49:04 +0000 (0:00:01.064) 0:05:20.445 ******** 2026-04-04 00:53:35.435725 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.435731 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.435737 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.435744 | orchestrator | 2026-04-04 00:53:35.435750 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-04 00:53:35.435757 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.657) 0:05:21.102 ******** 2026-04-04 00:53:35.435763 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435769 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435776 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435782 | orchestrator | 2026-04-04 00:53:35.435789 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-04 00:53:35.435803 | orchestrator | Saturday 04 April 2026 00:49:05 +0000 (0:00:00.270) 0:05:21.373 ******** 2026-04-04 00:53:35.435810 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435816 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435822 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435829 | orchestrator | 2026-04-04 00:53:35.435839 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-04 00:53:35.435845 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.480) 0:05:21.853 
******** 2026-04-04 00:53:35.435852 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.435858 | orchestrator | 2026-04-04 00:53:35.435865 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-04 00:53:35.435871 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.438) 0:05:22.291 ******** 2026-04-04 00:53:35.435877 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435883 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435890 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435897 | orchestrator | 2026-04-04 00:53:35.435903 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-04 00:53:35.435910 | orchestrator | Saturday 04 April 2026 00:49:06 +0000 (0:00:00.295) 0:05:22.587 ******** 2026-04-04 00:53:35.435916 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.435923 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:53:35.435929 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:53:35.435935 | orchestrator | 2026-04-04 00:53:35.435942 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-04 00:53:35.435948 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.458) 0:05:23.045 ******** 2026-04-04 00:53:35.435954 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-04-04 00:53:35.435960 | orchestrator | 2026-04-04 00:53:35.435967 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-04 00:53:35.435973 | orchestrator | Saturday 04 April 2026 00:49:07 +0000 (0:00:00.474) 0:05:23.520 ******** 2026-04-04 00:53:35.435980 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.435986 | orchestrator | changed: 
[testbed-node-1] 2026-04-04 00:53:35.435992 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.435998 | orchestrator | 2026-04-04 00:53:35.436005 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-04 00:53:35.436011 | orchestrator | Saturday 04 April 2026 00:49:09 +0000 (0:00:01.198) 0:05:24.719 ******** 2026-04-04 00:53:35.436017 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.436053 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.436061 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.436067 | orchestrator | 2026-04-04 00:53:35.436074 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-04 00:53:35.436080 | orchestrator | Saturday 04 April 2026 00:49:10 +0000 (0:00:01.521) 0:05:26.240 ******** 2026-04-04 00:53:35.436087 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.436093 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.436100 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.436106 | orchestrator | 2026-04-04 00:53:35.436112 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-04 00:53:35.436119 | orchestrator | Saturday 04 April 2026 00:49:12 +0000 (0:00:01.470) 0:05:27.710 ******** 2026-04-04 00:53:35.436125 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.436131 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.436138 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.436144 | orchestrator | 2026-04-04 00:53:35.436150 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-04 00:53:35.436157 | orchestrator | Saturday 04 April 2026 00:49:13 +0000 (0:00:01.796) 0:05:29.507 ******** 2026-04-04 00:53:35.436163 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.436174 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 00:53:35.436180 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-04 00:53:35.436186 | orchestrator | 2026-04-04 00:53:35.436193 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-04 00:53:35.436199 | orchestrator | Saturday 04 April 2026 00:49:14 +0000 (0:00:00.405) 0:05:29.913 ******** 2026-04-04 00:53:35.436229 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-04 00:53:35.436237 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-04 00:53:35.436243 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:53:35.436249 | orchestrator | 2026-04-04 00:53:35.436255 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-04 00:53:35.436261 | orchestrator | Saturday 04 April 2026 00:49:27 +0000 (0:00:13.463) 0:05:43.376 ******** 2026-04-04 00:53:35.436268 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:53:35.436274 | orchestrator | 2026-04-04 00:53:35.436281 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-04 00:53:35.436287 | orchestrator | Saturday 04 April 2026 00:49:28 +0000 (0:00:01.224) 0:05:44.601 ******** 2026-04-04 00:53:35.436294 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.436300 | orchestrator | 2026-04-04 00:53:35.436307 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-04 00:53:35.436313 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.316) 0:05:44.918 ******** 2026-04-04 00:53:35.436319 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.436326 | orchestrator | 2026-04-04 00:53:35.436332 | orchestrator | TASK 
[ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-04 00:53:35.436338 | orchestrator | Saturday 04 April 2026 00:49:29 +0000 (0:00:00.140) 0:05:45.059 ******** 2026-04-04 00:53:35.436344 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-04 00:53:35.436350 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-04 00:53:35.436356 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-04 00:53:35.436363 | orchestrator | 2026-04-04 00:53:35.436369 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-04-04 00:53:35.436379 | orchestrator | Saturday 04 April 2026 00:49:35 +0000 (0:00:06.023) 0:05:51.083 ******** 2026-04-04 00:53:35.436386 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-04 00:53:35.436392 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-04 00:53:35.436399 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-04 00:53:35.436405 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-04 00:53:35.436411 | orchestrator | 2026-04-04 00:53:35.436417 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:53:35.436423 | orchestrator | Saturday 04 April 2026 00:49:40 +0000 (0:00:04.615) 0:05:55.698 ******** 2026-04-04 00:53:35.436429 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.436436 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.436442 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.436449 | orchestrator | 2026-04-04 00:53:35.436455 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-04 00:53:35.436461 | orchestrator | Saturday 04 April 2026 
00:49:40 +0000 (0:00:00.831) 0:05:56.530 ******** 2026-04-04 00:53:35.436468 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:53:35.436474 | orchestrator | 2026-04-04 00:53:35.436481 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-04 00:53:35.436487 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:00.475) 0:05:57.005 ******** 2026-04-04 00:53:35.436497 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.436503 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.436510 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.436516 | orchestrator | 2026-04-04 00:53:35.436523 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-04 00:53:35.436530 | orchestrator | Saturday 04 April 2026 00:49:41 +0000 (0:00:00.269) 0:05:57.274 ******** 2026-04-04 00:53:35.436536 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:53:35.436542 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:53:35.436549 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:53:35.436555 | orchestrator | 2026-04-04 00:53:35.436561 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-04 00:53:35.436568 | orchestrator | Saturday 04 April 2026 00:49:42 +0000 (0:00:01.359) 0:05:58.633 ******** 2026-04-04 00:53:35.436574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-04 00:53:35.436580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-04 00:53:35.436586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-04 00:53:35.436593 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:53:35.436600 | orchestrator | 2026-04-04 00:53:35.436607 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 
2026-04-04 00:53:35.436613 | orchestrator | Saturday 04 April 2026 00:49:43 +0000 (0:00:00.533) 0:05:59.166 ******** 2026-04-04 00:53:35.436620 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:53:35.436626 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:53:35.436633 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:53:35.436640 | orchestrator | 2026-04-04 00:53:35.436646 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-04 00:53:35.436653 | orchestrator | 2026-04-04 00:53:35.436659 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:53:35.436665 | orchestrator | Saturday 04 April 2026 00:49:43 +0000 (0:00:00.445) 0:05:59.612 ******** 2026-04-04 00:53:35.436672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.436678 | orchestrator | 2026-04-04 00:53:35.436684 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:53:35.436690 | orchestrator | Saturday 04 April 2026 00:49:44 +0000 (0:00:00.582) 0:06:00.194 ******** 2026-04-04 00:53:35.436717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.436724 | orchestrator | 2026-04-04 00:53:35.436730 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:53:35.436736 | orchestrator | Saturday 04 April 2026 00:49:44 +0000 (0:00:00.437) 0:06:00.632 ******** 2026-04-04 00:53:35.436742 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.436749 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.436755 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.436761 | orchestrator | 2026-04-04 00:53:35.436768 | orchestrator | TASK [ceph-handler : Check 
for an osd container] ******************************* 2026-04-04 00:53:35.436774 | orchestrator | Saturday 04 April 2026 00:49:45 +0000 (0:00:00.304) 0:06:00.936 ******** 2026-04-04 00:53:35.436780 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.436787 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.436793 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.436799 | orchestrator | 2026-04-04 00:53:35.436805 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:53:35.436811 | orchestrator | Saturday 04 April 2026 00:49:46 +0000 (0:00:00.861) 0:06:01.798 ******** 2026-04-04 00:53:35.436818 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.436824 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.436830 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.436836 | orchestrator | 2026-04-04 00:53:35.436842 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:53:35.436854 | orchestrator | Saturday 04 April 2026 00:49:46 +0000 (0:00:00.678) 0:06:02.477 ******** 2026-04-04 00:53:35.436860 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.436867 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.436873 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.436879 | orchestrator | 2026-04-04 00:53:35.436885 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:53:35.436892 | orchestrator | Saturday 04 April 2026 00:49:47 +0000 (0:00:00.651) 0:06:03.128 ******** 2026-04-04 00:53:35.436898 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.436904 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.436914 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.436920 | orchestrator | 2026-04-04 00:53:35.436925 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-04-04 00:53:35.436929 | orchestrator | Saturday 04 April 2026 00:49:47 +0000 (0:00:00.258) 0:06:03.387 ******** 2026-04-04 00:53:35.436933 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.436937 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.436940 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.436944 | orchestrator | 2026-04-04 00:53:35.436948 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:53:35.436951 | orchestrator | Saturday 04 April 2026 00:49:48 +0000 (0:00:00.439) 0:06:03.827 ******** 2026-04-04 00:53:35.436955 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.436959 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.436963 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.436966 | orchestrator | 2026-04-04 00:53:35.436970 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:53:35.436974 | orchestrator | Saturday 04 April 2026 00:49:48 +0000 (0:00:00.274) 0:06:04.102 ******** 2026-04-04 00:53:35.436977 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.436981 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.436985 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.436988 | orchestrator | 2026-04-04 00:53:35.436992 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:53:35.436996 | orchestrator | Saturday 04 April 2026 00:49:49 +0000 (0:00:00.693) 0:06:04.795 ******** 2026-04-04 00:53:35.437000 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437004 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437007 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437011 | orchestrator | 2026-04-04 00:53:35.437015 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:53:35.437018 | 
orchestrator | Saturday 04 April 2026 00:49:49 +0000 (0:00:00.667) 0:06:05.462 ******** 2026-04-04 00:53:35.437044 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437049 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437052 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437056 | orchestrator | 2026-04-04 00:53:35.437060 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:53:35.437064 | orchestrator | Saturday 04 April 2026 00:49:50 +0000 (0:00:00.437) 0:06:05.899 ******** 2026-04-04 00:53:35.437067 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437071 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437075 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437079 | orchestrator | 2026-04-04 00:53:35.437083 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:53:35.437087 | orchestrator | Saturday 04 April 2026 00:49:50 +0000 (0:00:00.285) 0:06:06.185 ******** 2026-04-04 00:53:35.437090 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437094 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437098 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437101 | orchestrator | 2026-04-04 00:53:35.437105 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:53:35.437109 | orchestrator | Saturday 04 April 2026 00:49:50 +0000 (0:00:00.269) 0:06:06.454 ******** 2026-04-04 00:53:35.437116 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437120 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437123 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437127 | orchestrator | 2026-04-04 00:53:35.437131 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:53:35.437134 | orchestrator | Saturday 04 April 2026 
00:49:51 +0000 (0:00:00.259) 0:06:06.714 ******** 2026-04-04 00:53:35.437138 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437142 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437145 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437149 | orchestrator | 2026-04-04 00:53:35.437153 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:53:35.437157 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:00.438) 0:06:07.152 ******** 2026-04-04 00:53:35.437160 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437164 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437168 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437172 | orchestrator | 2026-04-04 00:53:35.437178 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:53:35.437182 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:00.258) 0:06:07.411 ******** 2026-04-04 00:53:35.437186 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437189 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437193 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437197 | orchestrator | 2026-04-04 00:53:35.437201 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:53:35.437204 | orchestrator | Saturday 04 April 2026 00:49:51 +0000 (0:00:00.261) 0:06:07.673 ******** 2026-04-04 00:53:35.437208 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437212 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437216 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437219 | orchestrator | 2026-04-04 00:53:35.437223 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-04 00:53:35.437227 | orchestrator | Saturday 04 April 2026 00:49:52 +0000 
(0:00:00.249) 0:06:07.922 ******** 2026-04-04 00:53:35.437230 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437234 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437238 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437241 | orchestrator | 2026-04-04 00:53:35.437245 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:53:35.437249 | orchestrator | Saturday 04 April 2026 00:49:52 +0000 (0:00:00.450) 0:06:08.372 ******** 2026-04-04 00:53:35.437253 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437256 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437260 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437264 | orchestrator | 2026-04-04 00:53:35.437268 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-04 00:53:35.437271 | orchestrator | Saturday 04 April 2026 00:49:53 +0000 (0:00:00.519) 0:06:08.892 ******** 2026-04-04 00:53:35.437275 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437279 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437282 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437286 | orchestrator | 2026-04-04 00:53:35.437292 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-04 00:53:35.437296 | orchestrator | Saturday 04 April 2026 00:49:53 +0000 (0:00:00.262) 0:06:09.154 ******** 2026-04-04 00:53:35.437300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:53:35.437304 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:53:35.437307 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:53:35.437311 | orchestrator | 2026-04-04 00:53:35.437315 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] 
****************************** 2026-04-04 00:53:35.437318 | orchestrator | Saturday 04 April 2026 00:49:54 +0000 (0:00:00.705) 0:06:09.860 ******** 2026-04-04 00:53:35.437325 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.437329 | orchestrator | 2026-04-04 00:53:35.437332 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-04 00:53:35.437336 | orchestrator | Saturday 04 April 2026 00:49:54 +0000 (0:00:00.664) 0:06:10.524 ******** 2026-04-04 00:53:35.437340 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437344 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437347 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437351 | orchestrator | 2026-04-04 00:53:35.437355 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-04 00:53:35.437359 | orchestrator | Saturday 04 April 2026 00:49:55 +0000 (0:00:00.245) 0:06:10.770 ******** 2026-04-04 00:53:35.437362 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437366 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437370 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437374 | orchestrator | 2026-04-04 00:53:35.437377 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-04 00:53:35.437381 | orchestrator | Saturday 04 April 2026 00:49:55 +0000 (0:00:00.264) 0:06:11.034 ******** 2026-04-04 00:53:35.437385 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437388 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437392 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437396 | orchestrator | 2026-04-04 00:53:35.437400 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-04 00:53:35.437403 | orchestrator | 
Saturday 04 April 2026 00:49:56 +0000 (0:00:00.799) 0:06:11.833 ******** 2026-04-04 00:53:35.437407 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437411 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437414 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437418 | orchestrator | 2026-04-04 00:53:35.437422 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-04 00:53:35.437426 | orchestrator | Saturday 04 April 2026 00:49:56 +0000 (0:00:00.319) 0:06:12.153 ******** 2026-04-04 00:53:35.437429 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-04 00:53:35.437433 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-04 00:53:35.437437 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-04 00:53:35.437441 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-04 00:53:35.437444 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-04 00:53:35.437448 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-04 00:53:35.437452 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-04 00:53:35.437455 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-04 00:53:35.437464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-04 00:53:35.437468 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-04 00:53:35.437472 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-04 00:53:35.437476 | 
orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-04 00:53:35.437479 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-04 00:53:35.437483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-04 00:53:35.437487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-04 00:53:35.437493 | orchestrator | 2026-04-04 00:53:35.437497 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-04 00:53:35.437501 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:04.202) 0:06:16.356 ******** 2026-04-04 00:53:35.437505 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437508 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437512 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437516 | orchestrator | 2026-04-04 00:53:35.437519 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-04 00:53:35.437523 | orchestrator | Saturday 04 April 2026 00:50:00 +0000 (0:00:00.261) 0:06:16.617 ******** 2026-04-04 00:53:35.437527 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.437531 | orchestrator | 2026-04-04 00:53:35.437534 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-04 00:53:35.437538 | orchestrator | Saturday 04 April 2026 00:50:01 +0000 (0:00:00.646) 0:06:17.264 ******** 2026-04-04 00:53:35.437544 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-04 00:53:35.437554 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-04 00:53:35.437560 | orchestrator | ok: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd/) 2026-04-04 00:53:35.437566 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-04 00:53:35.437573 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-04 00:53:35.437579 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-04-04 00:53:35.437586 | orchestrator | 2026-04-04 00:53:35.437592 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-04 00:53:35.437599 | orchestrator | Saturday 04 April 2026 00:50:02 +0000 (0:00:01.045) 0:06:18.309 ******** 2026-04-04 00:53:35.437605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.437612 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:53:35.437619 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:53:35.437625 | orchestrator | 2026-04-04 00:53:35.437631 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:53:35.437637 | orchestrator | Saturday 04 April 2026 00:50:04 +0000 (0:00:01.808) 0:06:20.118 ******** 2026-04-04 00:53:35.437644 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:53:35.437650 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:53:35.437657 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.437663 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:53:35.437669 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-04 00:53:35.437676 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.437683 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:53:35.437689 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-04 00:53:35.437694 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.437698 | orchestrator | 2026-04-04 00:53:35.437702 | 
orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-04 00:53:35.437705 | orchestrator | Saturday 04 April 2026 00:50:05 +0000 (0:00:01.300) 0:06:21.418 ******** 2026-04-04 00:53:35.437709 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:53:35.437713 | orchestrator | 2026-04-04 00:53:35.437718 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-04 00:53:35.437724 | orchestrator | Saturday 04 April 2026 00:50:07 +0000 (0:00:01.723) 0:06:23.141 ******** 2026-04-04 00:53:35.437731 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.437737 | orchestrator | 2026-04-04 00:53:35.437743 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-04-04 00:53:35.437749 | orchestrator | Saturday 04 April 2026 00:50:07 +0000 (0:00:00.468) 0:06:23.610 ******** 2026-04-04 00:53:35.437761 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f7bbb1d-c278-5154-a1d3-309d62b79a2f', 'data_vg': 'ceph-2f7bbb1d-c278-5154-a1d3-309d62b79a2f'}) 2026-04-04 00:53:35.437768 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-92575011-0645-5cdf-badf-43ad86ae8159', 'data_vg': 'ceph-92575011-0645-5cdf-badf-43ad86ae8159'}) 2026-04-04 00:53:35.437775 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f0c57fe1-7323-5f70-a575-22ad75776519', 'data_vg': 'ceph-f0c57fe1-7323-5f70-a575-22ad75776519'}) 2026-04-04 00:53:35.437782 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa', 'data_vg': 'ceph-b98f96ba-ddcd-5dd8-8e53-77fbcda444fa'}) 2026-04-04 00:53:35.437792 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-35995e13-d19e-546f-ae20-ff296f4077c7', 'data_vg': 
'ceph-35995e13-d19e-546f-ae20-ff296f4077c7'}) 2026-04-04 00:53:35.437797 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e865913-a109-5f6b-9820-a5901c50a906', 'data_vg': 'ceph-1e865913-a109-5f6b-9820-a5901c50a906'}) 2026-04-04 00:53:35.437801 | orchestrator | 2026-04-04 00:53:35.437805 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-04 00:53:35.437809 | orchestrator | Saturday 04 April 2026 00:50:40 +0000 (0:00:32.117) 0:06:55.728 ******** 2026-04-04 00:53:35.437816 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.437822 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.437828 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.437834 | orchestrator | 2026-04-04 00:53:35.437840 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-04 00:53:35.437847 | orchestrator | Saturday 04 April 2026 00:50:40 +0000 (0:00:00.661) 0:06:56.390 ******** 2026-04-04 00:53:35.437853 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.437860 | orchestrator | 2026-04-04 00:53:35.437866 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-04 00:53:35.437872 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:00.516) 0:06:56.906 ******** 2026-04-04 00:53:35.437878 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.437885 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437891 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437897 | orchestrator | 2026-04-04 00:53:35.437904 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-04-04 00:53:35.437910 | orchestrator | Saturday 04 April 2026 00:50:41 +0000 (0:00:00.703) 0:06:57.609 ******** 2026-04-04 00:53:35.437917 | orchestrator | ok: 
[testbed-node-3] 2026-04-04 00:53:35.437923 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.437929 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.437936 | orchestrator | 2026-04-04 00:53:35.437942 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-04 00:53:35.437951 | orchestrator | Saturday 04 April 2026 00:50:43 +0000 (0:00:01.823) 0:06:59.433 ******** 2026-04-04 00:53:35.437958 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.437964 | orchestrator | 2026-04-04 00:53:35.437970 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-04 00:53:35.437976 | orchestrator | Saturday 04 April 2026 00:50:44 +0000 (0:00:00.593) 0:07:00.027 ******** 2026-04-04 00:53:35.437983 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.437989 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.437995 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.438001 | orchestrator | 2026-04-04 00:53:35.438007 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-04 00:53:35.438052 | orchestrator | Saturday 04 April 2026 00:50:45 +0000 (0:00:01.246) 0:07:01.274 ******** 2026-04-04 00:53:35.438062 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.438068 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.438080 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.438086 | orchestrator | 2026-04-04 00:53:35.438093 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-04 00:53:35.438100 | orchestrator | Saturday 04 April 2026 00:50:47 +0000 (0:00:01.439) 0:07:02.713 ******** 2026-04-04 00:53:35.438108 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.438114 | orchestrator | changed: [testbed-node-4] 
2026-04-04 00:53:35.438121 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.438128 | orchestrator | 2026-04-04 00:53:35.438135 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-04 00:53:35.438142 | orchestrator | Saturday 04 April 2026 00:50:48 +0000 (0:00:01.888) 0:07:04.601 ******** 2026-04-04 00:53:35.438148 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438155 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438162 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.438169 | orchestrator | 2026-04-04 00:53:35.438175 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-04 00:53:35.438182 | orchestrator | Saturday 04 April 2026 00:50:49 +0000 (0:00:00.342) 0:07:04.944 ******** 2026-04-04 00:53:35.438190 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438196 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438203 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.438210 | orchestrator | 2026-04-04 00:53:35.438217 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-04 00:53:35.438223 | orchestrator | Saturday 04 April 2026 00:50:49 +0000 (0:00:00.289) 0:07:05.234 ******** 2026-04-04 00:53:35.438230 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:53:35.438237 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-04-04 00:53:35.438244 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-04-04 00:53:35.438250 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-04-04 00:53:35.438257 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-04-04 00:53:35.438264 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-04-04 00:53:35.438271 | orchestrator | 2026-04-04 00:53:35.438278 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-04 00:53:35.438284 | 
orchestrator | Saturday 04 April 2026 00:50:50 +0000 (0:00:01.441) 0:07:06.675 ******** 2026-04-04 00:53:35.438291 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-04 00:53:35.438298 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-04 00:53:35.438305 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-04 00:53:35.438312 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-04 00:53:35.438318 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-04 00:53:35.438325 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-04 00:53:35.438332 | orchestrator | 2026-04-04 00:53:35.438338 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-04 00:53:35.438345 | orchestrator | Saturday 04 April 2026 00:50:53 +0000 (0:00:02.135) 0:07:08.811 ******** 2026-04-04 00:53:35.438352 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-04-04 00:53:35.438360 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-04-04 00:53:35.438371 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-04-04 00:53:35.438378 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-04-04 00:53:35.438385 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-04-04 00:53:35.438392 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-04-04 00:53:35.438399 | orchestrator | 2026-04-04 00:53:35.438406 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-04 00:53:35.438413 | orchestrator | Saturday 04 April 2026 00:50:57 +0000 (0:00:04.115) 0:07:12.926 ******** 2026-04-04 00:53:35.438420 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438426 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438433 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:53:35.438440 | orchestrator | 2026-04-04 00:53:35.438454 | orchestrator | TASK [ceph-osd : Wait 
for all osd to be up] ************************************ 2026-04-04 00:53:35.438461 | orchestrator | Saturday 04 April 2026 00:50:59 +0000 (0:00:02.175) 0:07:15.102 ******** 2026-04-04 00:53:35.438468 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438474 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438481 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-04-04 00:53:35.438488 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-04 00:53:35.438495 | orchestrator | 2026-04-04 00:53:35.438502 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-04 00:53:35.438509 | orchestrator | Saturday 04 April 2026 00:51:12 +0000 (0:00:12.801) 0:07:27.904 ******** 2026-04-04 00:53:35.438516 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438523 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438530 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.438536 | orchestrator | 2026-04-04 00:53:35.438544 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:53:35.438552 | orchestrator | Saturday 04 April 2026 00:51:12 +0000 (0:00:00.729) 0:07:28.633 ******** 2026-04-04 00:53:35.438563 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438570 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438577 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.438584 | orchestrator | 2026-04-04 00:53:35.438591 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-04 00:53:35.438598 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:00.476) 0:07:29.110 ******** 2026-04-04 00:53:35.438605 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 
2026-04-04 00:53:35.438612 | orchestrator | 2026-04-04 00:53:35.438619 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-04 00:53:35.438626 | orchestrator | Saturday 04 April 2026 00:51:13 +0000 (0:00:00.437) 0:07:29.548 ******** 2026-04-04 00:53:35.438633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.438640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.438647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.438654 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438661 | orchestrator | 2026-04-04 00:53:35.438668 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-04 00:53:35.438675 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.348) 0:07:29.897 ******** 2026-04-04 00:53:35.438682 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438689 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438696 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.438703 | orchestrator | 2026-04-04 00:53:35.438710 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-04 00:53:35.438717 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.254) 0:07:30.152 ******** 2026-04-04 00:53:35.438724 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438731 | orchestrator | 2026-04-04 00:53:35.438738 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-04 00:53:35.438746 | orchestrator | Saturday 04 April 2026 00:51:14 +0000 (0:00:00.194) 0:07:30.346 ******** 2026-04-04 00:53:35.438753 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438760 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.438767 | orchestrator | skipping: 
[testbed-node-5] 2026-04-04 00:53:35.438774 | orchestrator | 2026-04-04 00:53:35.438781 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-04 00:53:35.438788 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.438) 0:07:30.784 ******** 2026-04-04 00:53:35.438795 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438802 | orchestrator | 2026-04-04 00:53:35.438809 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-04 00:53:35.438820 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.193) 0:07:30.977 ******** 2026-04-04 00:53:35.438827 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438834 | orchestrator | 2026-04-04 00:53:35.438841 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-04 00:53:35.438848 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.205) 0:07:31.183 ******** 2026-04-04 00:53:35.438855 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438862 | orchestrator | 2026-04-04 00:53:35.438869 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-04 00:53:35.438876 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.115) 0:07:31.299 ******** 2026-04-04 00:53:35.438883 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438890 | orchestrator | 2026-04-04 00:53:35.438897 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-04 00:53:35.438904 | orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.191) 0:07:31.490 ******** 2026-04-04 00:53:35.438911 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.438918 | orchestrator | 2026-04-04 00:53:35.438925 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-04 00:53:35.438932 | 
orchestrator | Saturday 04 April 2026 00:51:15 +0000 (0:00:00.194) 0:07:31.684 ********
2026-04-04 00:53:35.438943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:53:35.438950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:53:35.438957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:53:35.438964 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.438971 | orchestrator |
2026-04-04 00:53:35.438978 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-04 00:53:35.438985 | orchestrator | Saturday 04 April 2026 00:51:16 +0000 (0:00:00.330) 0:07:32.015 ********
2026-04-04 00:53:35.438992 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.438999 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439006 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439013 | orchestrator |
2026-04-04 00:53:35.439020 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-04 00:53:35.439040 | orchestrator | Saturday 04 April 2026 00:51:16 +0000 (0:00:00.266) 0:07:32.281 ********
2026-04-04 00:53:35.439046 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439052 | orchestrator |
2026-04-04 00:53:35.439059 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-04 00:53:35.439065 | orchestrator | Saturday 04 April 2026 00:51:17 +0000 (0:00:00.569) 0:07:32.851 ********
2026-04-04 00:53:35.439071 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439078 | orchestrator |
2026-04-04 00:53:35.439084 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-04 00:53:35.439091 | orchestrator |
2026-04-04 00:53:35.439097 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:53:35.439103 | orchestrator | Saturday 04 April 2026 00:51:17 +0000 (0:00:00.564) 0:07:33.415 ********
2026-04-04 00:53:35.439110 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.439118 | orchestrator |
2026-04-04 00:53:35.439128 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:53:35.439134 | orchestrator | Saturday 04 April 2026 00:51:18 +0000 (0:00:00.857) 0:07:34.273 ********
2026-04-04 00:53:35.439141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.439147 | orchestrator |
2026-04-04 00:53:35.439154 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:53:35.439160 | orchestrator | Saturday 04 April 2026 00:51:19 +0000 (0:00:01.009) 0:07:35.282 ********
2026-04-04 00:53:35.439169 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439176 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439182 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439189 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.439195 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.439201 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.439208 | orchestrator |
2026-04-04 00:53:35.439215 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:53:35.439221 | orchestrator | Saturday 04 April 2026 00:51:20 +0000 (0:00:01.231) 0:07:36.514 ********
2026-04-04 00:53:35.439228 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439235 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439242 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439247 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439254 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439260 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439267 | orchestrator |
2026-04-04 00:53:35.439274 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:53:35.439280 | orchestrator | Saturday 04 April 2026 00:51:21 +0000 (0:00:00.671) 0:07:37.185 ********
2026-04-04 00:53:35.439287 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439293 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439299 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439306 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439312 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439318 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439325 | orchestrator |
2026-04-04 00:53:35.439331 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:53:35.439338 | orchestrator | Saturday 04 April 2026 00:51:22 +0000 (0:00:00.853) 0:07:38.038 ********
2026-04-04 00:53:35.439344 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439350 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439356 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439363 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439369 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439375 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439381 | orchestrator |
2026-04-04 00:53:35.439387 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:53:35.439393 | orchestrator | Saturday 04 April 2026 00:51:23 +0000 (0:00:00.919) 0:07:38.723 ********
2026-04-04 00:53:35.439400 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439406 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439412 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439419 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.439425 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.439431 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.439437 | orchestrator |
2026-04-04 00:53:35.439444 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:53:35.439451 | orchestrator | Saturday 04 April 2026 00:51:23 +0000 (0:00:00.919) 0:07:39.642 ********
2026-04-04 00:53:35.439457 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439464 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439470 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439476 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439482 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439488 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439495 | orchestrator |
2026-04-04 00:53:35.439501 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:53:35.439508 | orchestrator | Saturday 04 April 2026 00:51:24 +0000 (0:00:00.680) 0:07:40.323 ********
2026-04-04 00:53:35.439514 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439525 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439532 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439564 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439570 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439577 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439583 | orchestrator |
2026-04-04 00:53:35.439589 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:53:35.439596 | orchestrator | Saturday 04 April 2026 00:51:25 +0000 (0:00:00.500) 0:07:40.823 ********
2026-04-04 00:53:35.439602 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439609 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439615 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439622 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.439628 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.439634 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.439641 | orchestrator |
2026-04-04 00:53:35.439647 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:53:35.439653 | orchestrator | Saturday 04 April 2026 00:51:26 +0000 (0:00:01.091) 0:07:41.915 ********
2026-04-04 00:53:35.439660 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439666 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439673 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439679 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.439685 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.439691 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.439698 | orchestrator |
2026-04-04 00:53:35.439704 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:53:35.439710 | orchestrator | Saturday 04 April 2026 00:51:27 +0000 (0:00:00.919) 0:07:42.834 ********
2026-04-04 00:53:35.439717 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439723 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439730 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439736 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439742 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439749 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439755 | orchestrator |
2026-04-04 00:53:35.439762 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-04 00:53:35.439771 | orchestrator | Saturday 04 April 2026 00:51:27 +0000 (0:00:00.655) 0:07:43.489 ********
2026-04-04 00:53:35.439778 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.439784 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.439791 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.439797 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.439803 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.439810 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.439816 | orchestrator |
2026-04-04 00:53:35.439822 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-04 00:53:35.439829 | orchestrator | Saturday 04 April 2026 00:51:28 +0000 (0:00:00.505) 0:07:43.995 ********
2026-04-04 00:53:35.439836 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439842 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439848 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439855 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439861 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439868 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439874 | orchestrator |
2026-04-04 00:53:35.439880 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-04 00:53:35.439887 | orchestrator | Saturday 04 April 2026 00:51:28 +0000 (0:00:00.665) 0:07:44.660 ********
2026-04-04 00:53:35.439893 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439900 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439906 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439912 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439919 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439925 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.439931 | orchestrator |
2026-04-04 00:53:35.439937 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-04 00:53:35.439957 | orchestrator | Saturday 04 April 2026 00:51:29 +0000 (0:00:00.488) 0:07:45.149 ********
2026-04-04 00:53:35.439963 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.439970 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.439976 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.439982 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.439989 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.439995 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.440001 | orchestrator |
2026-04-04 00:53:35.440008 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-04 00:53:35.440014 | orchestrator | Saturday 04 April 2026 00:51:30 +0000 (0:00:00.760) 0:07:45.909 ********
2026-04-04 00:53:35.440021 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440055 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440062 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440068 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.440074 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.440081 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.440087 | orchestrator |
2026-04-04 00:53:35.440093 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-04 00:53:35.440100 | orchestrator | Saturday 04 April 2026 00:51:30 +0000 (0:00:00.578) 0:07:46.488 ********
2026-04-04 00:53:35.440106 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440112 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440118 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440125 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:53:35.440132 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:53:35.440138 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:53:35.440145 | orchestrator |
2026-04-04 00:53:35.440151 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-04 00:53:35.440157 | orchestrator | Saturday 04 April 2026 00:51:31 +0000 (0:00:00.683) 0:07:47.172 ********
2026-04-04 00:53:35.440164 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440170 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440177 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440183 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440190 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.440197 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.440203 | orchestrator |
2026-04-04 00:53:35.440210 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-04 00:53:35.440221 | orchestrator | Saturday 04 April 2026 00:51:32 +0000 (0:00:00.536) 0:07:47.708 ********
2026-04-04 00:53:35.440228 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440234 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440240 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440246 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440252 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.440258 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.440264 | orchestrator |
2026-04-04 00:53:35.440270 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-04 00:53:35.440277 | orchestrator | Saturday 04 April 2026 00:51:32 +0000 (0:00:00.687) 0:07:48.396 ********
2026-04-04 00:53:35.440283 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440290 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440296 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440302 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440308 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.440315 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.440321 | orchestrator |
2026-04-04 00:53:35.440327 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-04 00:53:35.440333 | orchestrator | Saturday 04 April 2026 00:51:33 +0000 (0:00:01.035) 0:07:49.431 ********
2026-04-04 00:53:35.440339 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:53:35.440346 | orchestrator |
2026-04-04 00:53:35.440357 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-04 00:53:35.440363 | orchestrator | Saturday 04 April 2026 00:51:37 +0000 (0:00:04.067) 0:07:53.499 ********
2026-04-04 00:53:35.440369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:53:35.440375 | orchestrator |
2026-04-04 00:53:35.440382 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-04 00:53:35.440388 | orchestrator | Saturday 04 April 2026 00:51:39 +0000 (0:00:01.613) 0:07:55.112 ********
2026-04-04 00:53:35.440394 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.440401 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.440407 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.440413 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440419 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.440429 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.440435 | orchestrator |
2026-04-04 00:53:35.440441 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-04 00:53:35.440447 | orchestrator | Saturday 04 April 2026 00:51:40 +0000 (0:00:01.502) 0:07:56.615 ********
2026-04-04 00:53:35.440454 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.440460 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.440466 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.440472 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.440478 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.440484 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.440490 | orchestrator |
2026-04-04 00:53:35.440496 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-04 00:53:35.440501 | orchestrator | Saturday 04 April 2026 00:51:42 +0000 (0:00:01.089) 0:07:57.704 ********
2026-04-04 00:53:35.440507 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.440514 | orchestrator |
2026-04-04 00:53:35.440521 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-04-04 00:53:35.440527 | orchestrator | Saturday 04 April 2026 00:51:43 +0000 (0:00:01.019) 0:07:58.724 ********
2026-04-04 00:53:35.440534 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.440537 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.440541 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.440545 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.440549 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.440552 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.440556 | orchestrator |
2026-04-04 00:53:35.440559 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-04-04 00:53:35.440563 | orchestrator | Saturday 04 April 2026 00:51:44 +0000 (0:00:01.586) 0:08:00.310 ********
2026-04-04 00:53:35.440567 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.440571 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.440574 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.440578 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.440582 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.440585 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.440589 | orchestrator |
2026-04-04 00:53:35.440593 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-04-04 00:53:35.440596 | orchestrator | Saturday 04 April 2026 00:51:48 +0000 (0:00:03.850) 0:08:04.161 ********
2026-04-04 00:53:35.440600 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:53:35.440604 | orchestrator |
2026-04-04 00:53:35.440608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-04-04 00:53:35.440612 | orchestrator | Saturday 04 April 2026 00:51:49 +0000 (0:00:01.031) 0:08:05.193 ********
2026-04-04 00:53:35.440615 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440622 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440626 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440630 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440634 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.440637 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.440641 | orchestrator |
2026-04-04 00:53:35.440645 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-04-04 00:53:35.440649 | orchestrator | Saturday 04 April 2026 00:51:50 +0000 (0:00:00.545) 0:08:05.738 ********
2026-04-04 00:53:35.440652 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.440656 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.440660 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.440664 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:53:35.440667 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:53:35.440671 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:53:35.440675 | orchestrator |
2026-04-04 00:53:35.440678 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-04-04 00:53:35.440685 | orchestrator | Saturday 04 April 2026 00:51:52 +0000 (0:00:02.369) 0:08:08.108 ********
2026-04-04 00:53:35.440689 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440693 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440697 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440700 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:53:35.440704 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:53:35.440708 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:53:35.440711 | orchestrator |
2026-04-04 00:53:35.440715 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-04-04 00:53:35.440719 | orchestrator |
2026-04-04 00:53:35.440722 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-04 00:53:35.440726 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.748) 0:08:08.857 ********
2026-04-04 00:53:35.440730 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:35.440734 | orchestrator |
2026-04-04 00:53:35.440738 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-04 00:53:35.440741 | orchestrator | Saturday 04 April 2026 00:51:53 +0000 (0:00:00.612) 0:08:09.469 ********
2026-04-04 00:53:35.440745 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:35.440749 | orchestrator |
2026-04-04 00:53:35.440753 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-04 00:53:35.440756 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.451) 0:08:09.921 ********
2026-04-04 00:53:35.440760 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440764 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440768 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440771 | orchestrator |
2026-04-04 00:53:35.440775 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-04 00:53:35.440779 | orchestrator | Saturday 04 April 2026 00:51:54 +0000 (0:00:00.422) 0:08:10.344 ********
2026-04-04 00:53:35.440783 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440788 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440792 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440796 | orchestrator |
2026-04-04 00:53:35.440800 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-04 00:53:35.440803 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.659) 0:08:11.004 ********
2026-04-04 00:53:35.440807 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440811 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440815 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440829 | orchestrator |
2026-04-04 00:53:35.440833 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-04 00:53:35.440837 | orchestrator | Saturday 04 April 2026 00:51:55 +0000 (0:00:00.678) 0:08:11.682 ********
2026-04-04 00:53:35.440844 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440848 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440851 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440855 | orchestrator |
2026-04-04 00:53:35.440859 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-04 00:53:35.440863 | orchestrator | Saturday 04 April 2026 00:51:56 +0000 (0:00:00.654) 0:08:12.337 ********
2026-04-04 00:53:35.440866 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440870 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440874 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440878 | orchestrator |
2026-04-04 00:53:35.440882 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-04 00:53:35.440885 | orchestrator | Saturday 04 April 2026 00:51:57 +0000 (0:00:00.442) 0:08:12.779 ********
2026-04-04 00:53:35.440889 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440893 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440897 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440900 | orchestrator |
2026-04-04 00:53:35.440904 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-04 00:53:35.440908 | orchestrator | Saturday 04 April 2026 00:51:57 +0000 (0:00:00.257) 0:08:13.037 ********
2026-04-04 00:53:35.440912 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440915 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440919 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440923 | orchestrator |
2026-04-04 00:53:35.440926 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-04 00:53:35.440930 | orchestrator | Saturday 04 April 2026 00:51:57 +0000 (0:00:00.255) 0:08:13.292 ********
2026-04-04 00:53:35.440934 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440938 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440942 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440945 | orchestrator |
2026-04-04 00:53:35.440949 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-04 00:53:35.440953 | orchestrator | Saturday 04 April 2026 00:51:58 +0000 (0:00:00.695) 0:08:13.988 ********
2026-04-04 00:53:35.440957 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.440960 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.440964 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.440968 | orchestrator |
2026-04-04 00:53:35.440972 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-04 00:53:35.440976 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.900) 0:08:14.888 ********
2026-04-04 00:53:35.440979 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.440983 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.440987 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.440991 | orchestrator |
2026-04-04 00:53:35.440994 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-04 00:53:35.440998 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.272) 0:08:15.161 ********
2026-04-04 00:53:35.441002 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441006 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441009 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.441013 | orchestrator |
2026-04-04 00:53:35.441017 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-04 00:53:35.441021 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.304) 0:08:15.466 ********
2026-04-04 00:53:35.441033 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.441037 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.441043 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.441047 | orchestrator |
2026-04-04 00:53:35.441051 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-04 00:53:35.441054 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.269) 0:08:15.735 ********
2026-04-04 00:53:35.441058 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.441062 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.441068 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.441072 | orchestrator |
2026-04-04 00:53:35.441076 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-04 00:53:35.441080 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.463) 0:08:16.199 ********
2026-04-04 00:53:35.441083 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.441087 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.441091 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.441095 | orchestrator |
2026-04-04 00:53:35.441098 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-04 00:53:35.441102 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.275) 0:08:16.474 ********
2026-04-04 00:53:35.441106 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441109 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441113 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.441117 | orchestrator |
2026-04-04 00:53:35.441121 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-04 00:53:35.441124 | orchestrator | Saturday 04 April 2026 00:52:01 +0000 (0:00:00.283) 0:08:16.758 ********
2026-04-04 00:53:35.441128 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441132 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441136 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.441139 | orchestrator |
2026-04-04 00:53:35.441143 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-04 00:53:35.441147 | orchestrator | Saturday 04 April 2026 00:52:01 +0000 (0:00:00.255) 0:08:17.013 ********
2026-04-04 00:53:35.441151 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441154 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441158 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.441162 | orchestrator |
2026-04-04 00:53:35.441168 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-04 00:53:35.441172 | orchestrator | Saturday 04 April 2026 00:52:01 +0000 (0:00:00.413) 0:08:17.426 ********
2026-04-04 00:53:35.441175 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.441179 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.441183 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.441187 | orchestrator |
2026-04-04 00:53:35.441190 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-04 00:53:35.441194 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:00.319) 0:08:17.745 ********
2026-04-04 00:53:35.441198 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:53:35.441202 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:53:35.441205 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:53:35.441209 | orchestrator |
2026-04-04 00:53:35.441213 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-04-04 00:53:35.441217 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:00.488) 0:08:18.234 ********
2026-04-04 00:53:35.441220 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441224 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:53:35.441228 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-04-04 00:53:35.441232 | orchestrator |
2026-04-04 00:53:35.441235 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-04-04 00:53:35.441239 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:00.636) 0:08:18.870 ********
2026-04-04 00:53:35.441243 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:53:35.441247 | orchestrator |
2026-04-04 00:53:35.441251 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-04-04 00:53:35.441254 | orchestrator | Saturday 04 April 2026 00:52:04 +0000 (0:00:01.667) 0:08:20.538 ********
2026-04-04 00:53:35.441260 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-04-04 00:53:35.441270 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441274 | orchestrator |
2026-04-04 00:53:35.441277 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-04-04 00:53:35.441281 | orchestrator | Saturday 04 April 2026 00:52:05 +0000 (0:00:00.176) 0:08:20.715 ********
2026-04-04 00:53:35.441287 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-04 00:53:35.441295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-04-04 00:53:35.441299 | orchestrator |
2026-04-04 00:53:35.441302 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-04-04 00:53:35.441306 | orchestrator | Saturday 04 April 2026 00:52:11 +0000 (0:00:06.547) 0:08:27.262 ********
2026-04-04 00:53:35.441310 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 00:53:35.441314 | orchestrator |
2026-04-04 00:53:35.441318 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-04-04 00:53:35.441321 | orchestrator | Saturday 04 April 2026 00:52:14 +0000 (0:00:02.597) 0:08:29.860 ********
2026-04-04 00:53:35.441327 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:53:35.441331 | orchestrator |
2026-04-04 00:53:35.441335 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-04-04 00:53:35.441339 | orchestrator | Saturday 04 April 2026 00:52:14 +0000 (0:00:00.812) 0:08:30.673 ********
2026-04-04 00:53:35.441343 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-04 00:53:35.441346 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-04 00:53:35.441350 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-04-04 00:53:35.441354 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-04-04 00:53:35.441358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-04-04 00:53:35.441361 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-04-04 00:53:35.441365 | orchestrator |
2026-04-04 00:53:35.441369 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-04 00:53:35.441373 | orchestrator | Saturday 04 April 2026 00:52:16 +0000 (0:00:01.168) 0:08:31.841 ********
2026-04-04 00:53:35.441376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-04 00:53:35.441380 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-04 00:53:35.441384 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-04 00:53:35.441388 | orchestrator |
2026-04-04 00:53:35.441391 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-04 00:53:35.441395 | orchestrator | Saturday 04 April 2026 00:52:17 +0000 (0:00:01.728) 0:08:33.570 ********
2026-04-04 00:53:35.441399 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-04 00:53:35.441403 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-04 00:53:35.441406 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.441410 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-04 00:53:35.441423 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-04 00:53:35.441427 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.441431 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-04 00:53:35.441435 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-04 00:53:35.441439 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.441442 | orchestrator |
2026-04-04 00:53:35.441449 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-04 00:53:35.441453 | orchestrator | Saturday 04 April 2026 00:52:19 +0000 (0:00:01.230) 0:08:34.800 ********
2026-04-04 00:53:35.441456 | orchestrator | changed: [testbed-node-3]
2026-04-04 00:53:35.441460 | orchestrator | changed: [testbed-node-4]
2026-04-04 00:53:35.441464 | orchestrator | changed: [testbed-node-5]
2026-04-04 00:53:35.441468 | orchestrator |
2026-04-04 00:53:35.441471 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-04 00:53:35.441475 | orchestrator | Saturday 04 April 2026 00:52:21 +0000 (0:00:02.198) 0:08:36.999 ********
2026-04-04 00:53:35.441479 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:53:35.441483 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:53:35.441486 |
orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.441490 | orchestrator | 2026-04-04 00:53:35.441494 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-04 00:53:35.441498 | orchestrator | Saturday 04 April 2026 00:52:21 +0000 (0:00:00.448) 0:08:37.447 ******** 2026-04-04 00:53:35.441501 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.441505 | orchestrator | 2026-04-04 00:53:35.441509 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-04 00:53:35.441513 | orchestrator | Saturday 04 April 2026 00:52:22 +0000 (0:00:00.371) 0:08:37.818 ******** 2026-04-04 00:53:35.441517 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-3, testbed-node-5 2026-04-04 00:53:35.441520 | orchestrator | 2026-04-04 00:53:35.441524 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-04 00:53:35.441528 | orchestrator | Saturday 04 April 2026 00:52:22 +0000 (0:00:00.548) 0:08:38.366 ******** 2026-04-04 00:53:35.441532 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441535 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441539 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441543 | orchestrator | 2026-04-04 00:53:35.441547 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-04 00:53:35.441550 | orchestrator | Saturday 04 April 2026 00:52:23 +0000 (0:00:01.245) 0:08:39.612 ******** 2026-04-04 00:53:35.441554 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441558 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441562 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441565 | orchestrator | 2026-04-04 00:53:35.441569 | orchestrator | TASK 
[ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-04 00:53:35.441573 | orchestrator | Saturday 04 April 2026 00:52:25 +0000 (0:00:01.223) 0:08:40.836 ******** 2026-04-04 00:53:35.441576 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441580 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441584 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441588 | orchestrator | 2026-04-04 00:53:35.441591 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-04 00:53:35.441595 | orchestrator | Saturday 04 April 2026 00:52:27 +0000 (0:00:02.063) 0:08:42.899 ******** 2026-04-04 00:53:35.441599 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441603 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441606 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441610 | orchestrator | 2026-04-04 00:53:35.441614 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-04 00:53:35.441618 | orchestrator | Saturday 04 April 2026 00:52:29 +0000 (0:00:02.037) 0:08:44.937 ******** 2026-04-04 00:53:35.441621 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441625 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441629 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441633 | orchestrator | 2026-04-04 00:53:35.441639 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:53:35.441643 | orchestrator | Saturday 04 April 2026 00:52:30 +0000 (0:00:01.046) 0:08:45.984 ******** 2026-04-04 00:53:35.441650 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441654 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441658 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441662 | orchestrator | 2026-04-04 00:53:35.441665 | orchestrator | RUNNING HANDLER [ceph-handler : 
Mdss handler] ********************************** 2026-04-04 00:53:35.441669 | orchestrator | Saturday 04 April 2026 00:52:31 +0000 (0:00:00.772) 0:08:46.756 ******** 2026-04-04 00:53:35.441673 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.441677 | orchestrator | 2026-04-04 00:53:35.441680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-04 00:53:35.441684 | orchestrator | Saturday 04 April 2026 00:52:31 +0000 (0:00:00.501) 0:08:47.258 ******** 2026-04-04 00:53:35.441688 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441692 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441701 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441705 | orchestrator | 2026-04-04 00:53:35.441709 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-04 00:53:35.441713 | orchestrator | Saturday 04 April 2026 00:52:31 +0000 (0:00:00.300) 0:08:47.558 ******** 2026-04-04 00:53:35.441716 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.441720 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.441724 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.441728 | orchestrator | 2026-04-04 00:53:35.441731 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-04 00:53:35.441735 | orchestrator | Saturday 04 April 2026 00:52:33 +0000 (0:00:01.318) 0:08:48.877 ******** 2026-04-04 00:53:35.441739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.441743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.441748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.441752 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.441756 | orchestrator | 
2026-04-04 00:53:35.441760 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-04 00:53:35.441764 | orchestrator | Saturday 04 April 2026 00:52:33 +0000 (0:00:00.702) 0:08:49.580 ******** 2026-04-04 00:53:35.441767 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441771 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441775 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441779 | orchestrator | 2026-04-04 00:53:35.441783 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-04 00:53:35.441786 | orchestrator | 2026-04-04 00:53:35.441790 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-04 00:53:35.441794 | orchestrator | Saturday 04 April 2026 00:52:34 +0000 (0:00:00.470) 0:08:50.050 ******** 2026-04-04 00:53:35.441798 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.441802 | orchestrator | 2026-04-04 00:53:35.441805 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-04 00:53:35.441809 | orchestrator | Saturday 04 April 2026 00:52:34 +0000 (0:00:00.596) 0:08:50.646 ******** 2026-04-04 00:53:35.441813 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.441817 | orchestrator | 2026-04-04 00:53:35.441820 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-04 00:53:35.441824 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.464) 0:08:51.110 ******** 2026-04-04 00:53:35.441828 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.441832 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.441836 | orchestrator | skipping: 
[testbed-node-5] 2026-04-04 00:53:35.441839 | orchestrator | 2026-04-04 00:53:35.441843 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-04 00:53:35.441850 | orchestrator | Saturday 04 April 2026 00:52:35 +0000 (0:00:00.417) 0:08:51.528 ******** 2026-04-04 00:53:35.441854 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441858 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441861 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441865 | orchestrator | 2026-04-04 00:53:35.441869 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-04 00:53:35.441873 | orchestrator | Saturday 04 April 2026 00:52:36 +0000 (0:00:00.647) 0:08:52.176 ******** 2026-04-04 00:53:35.441877 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441880 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441884 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441888 | orchestrator | 2026-04-04 00:53:35.441892 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-04 00:53:35.441895 | orchestrator | Saturday 04 April 2026 00:52:37 +0000 (0:00:00.744) 0:08:52.920 ******** 2026-04-04 00:53:35.441899 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441903 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.441907 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.441910 | orchestrator | 2026-04-04 00:53:35.441914 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-04 00:53:35.441918 | orchestrator | Saturday 04 April 2026 00:52:37 +0000 (0:00:00.672) 0:08:53.594 ******** 2026-04-04 00:53:35.441922 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.441926 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.441929 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.441933 | 
orchestrator | 2026-04-04 00:53:35.441937 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-04 00:53:35.441941 | orchestrator | Saturday 04 April 2026 00:52:38 +0000 (0:00:00.510) 0:08:54.104 ******** 2026-04-04 00:53:35.441944 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.441948 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.441952 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.441956 | orchestrator | 2026-04-04 00:53:35.441959 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-04 00:53:35.441966 | orchestrator | Saturday 04 April 2026 00:52:38 +0000 (0:00:00.305) 0:08:54.410 ******** 2026-04-04 00:53:35.441970 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.441974 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.441978 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.441981 | orchestrator | 2026-04-04 00:53:35.441985 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-04 00:53:35.441989 | orchestrator | Saturday 04 April 2026 00:52:38 +0000 (0:00:00.263) 0:08:54.673 ******** 2026-04-04 00:53:35.441993 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.441996 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442000 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442004 | orchestrator | 2026-04-04 00:53:35.442008 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-04 00:53:35.442041 | orchestrator | Saturday 04 April 2026 00:52:39 +0000 (0:00:00.686) 0:08:55.359 ******** 2026-04-04 00:53:35.442046 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442050 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442054 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442058 | orchestrator | 2026-04-04 
00:53:35.442061 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-04 00:53:35.442065 | orchestrator | Saturday 04 April 2026 00:52:40 +0000 (0:00:00.907) 0:08:56.267 ******** 2026-04-04 00:53:35.442069 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442073 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442077 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442080 | orchestrator | 2026-04-04 00:53:35.442084 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-04 00:53:35.442088 | orchestrator | Saturday 04 April 2026 00:52:40 +0000 (0:00:00.259) 0:08:56.527 ******** 2026-04-04 00:53:35.442095 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442098 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442102 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442106 | orchestrator | 2026-04-04 00:53:35.442110 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-04 00:53:35.442114 | orchestrator | Saturday 04 April 2026 00:52:41 +0000 (0:00:00.257) 0:08:56.784 ******** 2026-04-04 00:53:35.442118 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442121 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442125 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442129 | orchestrator | 2026-04-04 00:53:35.442133 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-04 00:53:35.442137 | orchestrator | Saturday 04 April 2026 00:52:41 +0000 (0:00:00.285) 0:08:57.069 ******** 2026-04-04 00:53:35.442141 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442144 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442148 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442152 | orchestrator | 2026-04-04 00:53:35.442156 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-04 00:53:35.442160 | orchestrator | Saturday 04 April 2026 00:52:41 +0000 (0:00:00.411) 0:08:57.481 ******** 2026-04-04 00:53:35.442163 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442167 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442171 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442174 | orchestrator | 2026-04-04 00:53:35.442178 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-04 00:53:35.442182 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:00.294) 0:08:57.775 ******** 2026-04-04 00:53:35.442186 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442190 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442194 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442197 | orchestrator | 2026-04-04 00:53:35.442217 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-04 00:53:35.442221 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:00.267) 0:08:58.043 ******** 2026-04-04 00:53:35.442225 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442228 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442232 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442236 | orchestrator | 2026-04-04 00:53:35.442240 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-04 00:53:35.442244 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:00.312) 0:08:58.355 ******** 2026-04-04 00:53:35.442247 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442251 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442255 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442259 | orchestrator | 2026-04-04 00:53:35.442262 | orchestrator | TASK [ceph-handler : 
Set_fact handler_crash_status] **************************** 2026-04-04 00:53:35.442266 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.436) 0:08:58.792 ******** 2026-04-04 00:53:35.442270 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442274 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442277 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442281 | orchestrator | 2026-04-04 00:53:35.442285 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-04 00:53:35.442289 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.285) 0:08:59.077 ******** 2026-04-04 00:53:35.442292 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442296 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442300 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442304 | orchestrator | 2026-04-04 00:53:35.442308 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-04 00:53:35.442311 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.475) 0:08:59.552 ******** 2026-04-04 00:53:35.442315 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.442321 | orchestrator | 2026-04-04 00:53:35.442325 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-04 00:53:35.442329 | orchestrator | Saturday 04 April 2026 00:52:44 +0000 (0:00:00.584) 0:09:00.137 ******** 2026-04-04 00:53:35.442332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442336 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:53:35.442340 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:53:35.442344 | orchestrator | 2026-04-04 00:53:35.442351 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if 
needed] *********************************** 2026-04-04 00:53:35.442355 | orchestrator | Saturday 04 April 2026 00:52:46 +0000 (0:00:01.679) 0:09:01.817 ******** 2026-04-04 00:53:35.442359 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:53:35.442363 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-04 00:53:35.442367 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.442370 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:53:35.442374 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-04 00:53:35.442378 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.442382 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:53:35.442385 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-04 00:53:35.442389 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.442393 | orchestrator | 2026-04-04 00:53:35.442396 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-04 00:53:35.442400 | orchestrator | Saturday 04 April 2026 00:52:47 +0000 (0:00:01.191) 0:09:03.009 ******** 2026-04-04 00:53:35.442404 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442408 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442411 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442415 | orchestrator | 2026-04-04 00:53:35.442419 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-04 00:53:35.442423 | orchestrator | Saturday 04 April 2026 00:52:47 +0000 (0:00:00.268) 0:09:03.277 ******** 2026-04-04 00:53:35.442427 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.442430 | orchestrator | 2026-04-04 00:53:35.442434 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 
2026-04-04 00:53:35.442438 | orchestrator | Saturday 04 April 2026 00:52:48 +0000 (0:00:00.631) 0:09:03.909 ******** 2026-04-04 00:53:35.442444 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442448 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442455 | orchestrator | 2026-04-04 00:53:35.442459 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-04 00:53:35.442463 | orchestrator | Saturday 04 April 2026 00:52:49 +0000 (0:00:00.870) 0:09:04.780 ******** 2026-04-04 00:53:35.442467 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442471 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:53:35.442474 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442478 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:53:35.442482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442488 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-04 00:53:35.442493 | orchestrator | 2026-04-04 00:53:35.442499 | orchestrator | TASK [ceph-rgw : Get keys 
from monitors] *************************************** 2026-04-04 00:53:35.442506 | orchestrator | Saturday 04 April 2026 00:52:52 +0000 (0:00:03.637) 0:09:08.417 ******** 2026-04-04 00:53:35.442512 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442519 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:53:35.442526 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442532 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:53:35.442539 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:53:35.442546 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:53:35.442552 | orchestrator | 2026-04-04 00:53:35.442558 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-04 00:53:35.442565 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:02.421) 0:09:10.838 ******** 2026-04-04 00:53:35.442572 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 00:53:35.442580 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.442587 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 00:53:35.442594 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.442600 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 00:53:35.442607 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.442614 | orchestrator | 2026-04-04 00:53:35.442621 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-04 00:53:35.442628 | orchestrator | Saturday 04 April 2026 00:52:56 +0000 (0:00:01.217) 0:09:12.056 ******** 2026-04-04 00:53:35.442635 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-04 
00:53:35.442642 | orchestrator | 2026-04-04 00:53:35.442649 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-04 00:53:35.442656 | orchestrator | Saturday 04 April 2026 00:52:56 +0000 (0:00:00.197) 0:09:12.253 ******** 2026-04-04 00:53:35.442666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442685 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442689 | orchestrator | 2026-04-04 00:53:35.442693 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-04 00:53:35.442696 | orchestrator | Saturday 04 April 2026 00:52:57 +0000 (0:00:00.525) 0:09:12.779 ******** 2026-04-04 00:53:35.442700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-04 00:53:35.442715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-04 00:53:35.442726 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442729 | orchestrator | 2026-04-04 00:53:35.442733 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-04 00:53:35.442737 | orchestrator | Saturday 04 April 2026 00:52:57 +0000 (0:00:00.522) 0:09:13.301 ******** 2026-04-04 00:53:35.442740 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:53:35.442745 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:53:35.442748 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:53:35.442752 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:53:35.442756 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-04 00:53:35.442760 | orchestrator | 2026-04-04 00:53:35.442763 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-04 00:53:35.442767 | orchestrator | Saturday 04 April 2026 00:53:20 +0000 (0:00:22.385) 0:09:35.687 
******** 2026-04-04 00:53:35.442771 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442774 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442778 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442782 | orchestrator | 2026-04-04 00:53:35.442786 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-04 00:53:35.442789 | orchestrator | Saturday 04 April 2026 00:53:20 +0000 (0:00:00.404) 0:09:36.092 ******** 2026-04-04 00:53:35.442793 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442797 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442800 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442804 | orchestrator | 2026-04-04 00:53:35.442808 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-04 00:53:35.442812 | orchestrator | Saturday 04 April 2026 00:53:21 +0000 (0:00:00.599) 0:09:36.692 ******** 2026-04-04 00:53:35.442815 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.442819 | orchestrator | 2026-04-04 00:53:35.442823 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-04-04 00:53:35.442826 | orchestrator | Saturday 04 April 2026 00:53:21 +0000 (0:00:00.536) 0:09:37.228 ******** 2026-04-04 00:53:35.442830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.442834 | orchestrator | 2026-04-04 00:53:35.442837 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-04 00:53:35.442841 | orchestrator | Saturday 04 April 2026 00:53:22 +0000 (0:00:00.692) 0:09:37.921 ******** 2026-04-04 00:53:35.442845 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.442849 | orchestrator | 
changed: [testbed-node-4] 2026-04-04 00:53:35.442852 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.442856 | orchestrator | 2026-04-04 00:53:35.442860 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-04 00:53:35.442866 | orchestrator | Saturday 04 April 2026 00:53:23 +0000 (0:00:01.388) 0:09:39.309 ******** 2026-04-04 00:53:35.442869 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.442873 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.442880 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.442884 | orchestrator | 2026-04-04 00:53:35.442888 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-04 00:53:35.442891 | orchestrator | Saturday 04 April 2026 00:53:24 +0000 (0:00:01.254) 0:09:40.564 ******** 2026-04-04 00:53:35.442895 | orchestrator | changed: [testbed-node-3] 2026-04-04 00:53:35.442899 | orchestrator | changed: [testbed-node-4] 2026-04-04 00:53:35.442902 | orchestrator | changed: [testbed-node-5] 2026-04-04 00:53:35.442906 | orchestrator | 2026-04-04 00:53:35.442910 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-04 00:53:35.442914 | orchestrator | Saturday 04 April 2026 00:53:26 +0000 (0:00:02.093) 0:09:42.658 ******** 2026-04-04 00:53:35.442917 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442921 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442925 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-04 00:53:35.442929 | orchestrator | 2026-04-04 00:53:35.442932 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-04-04 00:53:35.442936 | orchestrator | Saturday 04 April 2026 00:53:29 +0000 (0:00:02.598) 0:09:45.257 ******** 2026-04-04 00:53:35.442940 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.442944 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.442947 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.442951 | orchestrator | 2026-04-04 00:53:35.442955 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-04-04 00:53:35.442961 | orchestrator | Saturday 04 April 2026 00:53:29 +0000 (0:00:00.307) 0:09:45.565 ******** 2026-04-04 00:53:35.442965 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 00:53:35.442968 | orchestrator | 2026-04-04 00:53:35.442972 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-04-04 00:53:35.442976 | orchestrator | Saturday 04 April 2026 00:53:30 +0000 (0:00:00.746) 0:09:46.312 ******** 2026-04-04 00:53:35.442980 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.442983 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.442987 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.442991 | orchestrator | 2026-04-04 00:53:35.442995 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-04-04 00:53:35.442998 | orchestrator | Saturday 04 April 2026 00:53:30 +0000 (0:00:00.277) 0:09:46.589 ******** 2026-04-04 00:53:35.443002 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.443006 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:53:35.443009 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:53:35.443013 | orchestrator | 2026-04-04 00:53:35.443017 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-04-04 
00:53:35.443021 | orchestrator | Saturday 04 April 2026 00:53:31 +0000 (0:00:00.291) 0:09:46.881 ******** 2026-04-04 00:53:35.443034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-04 00:53:35.443038 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-04 00:53:35.443042 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-04 00:53:35.443046 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:53:35.443049 | orchestrator | 2026-04-04 00:53:35.443053 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-04-04 00:53:35.443057 | orchestrator | Saturday 04 April 2026 00:53:32 +0000 (0:00:00.873) 0:09:47.754 ******** 2026-04-04 00:53:35.443061 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:53:35.443065 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:53:35.443069 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:53:35.443072 | orchestrator | 2026-04-04 00:53:35.443076 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:53:35.443085 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-04-04 00:53:35.443090 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-04-04 00:53:35.443093 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-04-04 00:53:35.443097 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-04-04 00:53:35.443101 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-04-04 00:53:35.443105 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-04-04 00:53:35.443109 | orchestrator | 2026-04-04 
00:53:35.443113 | orchestrator | 2026-04-04 00:53:35.443116 | orchestrator | 2026-04-04 00:53:35.443120 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:53:35.443124 | orchestrator | Saturday 04 April 2026 00:53:32 +0000 (0:00:00.238) 0:09:47.993 ******** 2026-04-04 00:53:35.443128 | orchestrator | =============================================================================== 2026-04-04 00:53:35.443134 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 59.75s 2026-04-04 00:53:35.443137 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 32.12s 2026-04-04 00:53:35.443141 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 22.39s 2026-04-04 00:53:35.443145 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.75s 2026-04-04 00:53:35.443149 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 13.46s 2026-04-04 00:53:35.443152 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.80s 2026-04-04 00:53:35.443156 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.11s 2026-04-04 00:53:35.443160 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 7.30s 2026-04-04 00:53:35.443163 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.18s 2026-04-04 00:53:35.443167 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.55s 2026-04-04 00:53:35.443171 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 6.16s 2026-04-04 00:53:35.443175 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.02s 2026-04-04 00:53:35.443178 | orchestrator | ceph-mgr : Add modules 
to ceph-mgr -------------------------------------- 4.62s 2026-04-04 00:53:35.443182 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.20s 2026-04-04 00:53:35.443186 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.12s 2026-04-04 00:53:35.443189 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.07s 2026-04-04 00:53:35.443193 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.87s 2026-04-04 00:53:35.443199 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.85s 2026-04-04 00:53:35.443203 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 3.64s 2026-04-04 00:53:35.443206 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.60s 2026-04-04 00:53:35.443210 | orchestrator | 2026-04-04 00:53:35 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:53:35.443214 | orchestrator | 2026-04-04 00:53:35 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:53:35.443220 | orchestrator | 2026-04-04 00:53:35 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:53:35.443224 | orchestrator | 2026-04-04 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:38.472950 | orchestrator | 2026-04-04 00:53:38 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:53:38.473546 | orchestrator | 2026-04-04 00:53:38 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:53:38.475412 | orchestrator | 2026-04-04 00:53:38 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:53:38.475844 | orchestrator | 2026-04-04 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:53:41.530536 | 
orchestrator | 2026-04-04 00:53:41 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:53:41.533471 | orchestrator | 2026-04-04 00:53:41 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:53:41.535854 | orchestrator | 2026-04-04 00:53:41 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:53:41.535938 | orchestrator | 2026-04-04 00:53:41 | INFO  | Wait 1 second(s) until the next check [identical STARTED status checks for the same three tasks repeated every ~3 seconds from 00:53:44 through 00:54:24] 2026-04-04 00:54:27.261048 | orchestrator | 2026-04-04 00:54:27 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:54:27.263629 | orchestrator | 
2026-04-04 00:54:27 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state STARTED 2026-04-04 00:54:27.265336 | orchestrator | 2026-04-04 00:54:27 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED 2026-04-04 00:54:27.265581 | orchestrator | 2026-04-04 00:54:27 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:54:30.312882 | orchestrator | 2026-04-04 00:54:30 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:54:30.314534 | orchestrator | 2026-04-04 00:54:30 | INFO  | Task 44207dc2-da4f-4a35-999e-30dad29296e7 is in state SUCCESS 2026-04-04 00:54:30.316179 | orchestrator | 2026-04-04 00:54:30.316221 | orchestrator | 2026-04-04 00:54:30.316229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:54:30.316236 | orchestrator | 2026-04-04 00:54:30.316243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:54:30.316249 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.321) 0:00:00.321 ******** 2026-04-04 00:54:30.316256 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:30.316262 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:54:30.316268 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:54:30.316275 | orchestrator | 2026-04-04 00:54:30.316281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:54:30.316287 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.264) 0:00:00.586 ******** 2026-04-04 00:54:30.316294 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-04 00:54:30.316301 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-04 00:54:30.316307 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-04 00:54:30.316423 | orchestrator | 2026-04-04 00:54:30.316431 | orchestrator | PLAY [Apply role 
opensearch] *************************************************** 2026-04-04 00:54:30.316437 | orchestrator | 2026-04-04 00:54:30.316444 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:54:30.316450 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.258) 0:00:00.844 ******** 2026-04-04 00:54:30.316457 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:30.316464 | orchestrator | 2026-04-04 00:54:30.316471 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-04-04 00:54:30.316477 | orchestrator | Saturday 04 April 2026 00:52:00 +0000 (0:00:00.517) 0:00:01.362 ******** 2026-04-04 00:54:30.316484 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:54:30.316513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:54:30.316520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-04 00:54:30.316526 | orchestrator | 2026-04-04 00:54:30.316533 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-04 00:54:30.316539 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:02.035) 0:00:03.397 ******** 2026-04-04 00:54:30.316548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316615 | orchestrator | 2026-04-04 00:54:30.316620 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:54:30.316626 | orchestrator | Saturday 04 April 2026 00:52:04 +0000 (0:00:01.513) 0:00:04.911 ******** 2026-04-04 00:54:30.316632 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:30.316637 | orchestrator | 2026-04-04 00:54:30.316643 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-04 00:54:30.316653 | orchestrator | Saturday 04 April 2026 
00:52:05 +0000 (0:00:00.561) 0:00:05.472 ******** 2026-04-04 00:54:30.316666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316722 | orchestrator | 
2026-04-04 00:54:30.316728 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-04 00:54:30.316735 | orchestrator | Saturday 04 April 2026 00:52:07 +0000 (0:00:02.822) 0:00:08.295 ******** 2026-04-04 00:54:30.316741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316755 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:30.316766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316788 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:30.316794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316809 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:30.316815 | orchestrator | 2026-04-04 00:54:30.316821 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-04 00:54:30.316828 | orchestrator | Saturday 04 April 2026 00:52:08 +0000 (0:00:00.522) 0:00:08.817 ******** 2026-04-04 00:54:30.316837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316860 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:30.316866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316879 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:30.316887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-04 00:54:30.316898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-04 00:54:30.316908 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:30.316916 | orchestrator | 2026-04-04 00:54:30.316939 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-04-04 00:54:30.316946 | orchestrator | Saturday 04 April 2026 00:52:09 +0000 (0:00:00.719) 0:00:09.538 ******** 2026-04-04 00:54:30.316953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316960 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.316981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.316999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.317006 | orchestrator | 2026-04-04 00:54:30.317012 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-04-04 00:54:30.317019 | orchestrator | Saturday 04 April 2026 00:52:11 +0000 (0:00:02.334) 0:00:11.873 ******** 2026-04-04 00:54:30.317025 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317031 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:30.317038 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:30.317045 | orchestrator | 2026-04-04 00:54:30.317052 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-04-04 00:54:30.317059 | orchestrator | Saturday 04 April 2026 00:52:13 +0000 (0:00:02.268) 0:00:14.141 ******** 2026-04-04 00:54:30.317066 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317073 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:30.317080 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:30.317087 | 
orchestrator | 2026-04-04 00:54:30.317095 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-04-04 00:54:30.317102 | orchestrator | Saturday 04 April 2026 00:52:15 +0000 (0:00:01.948) 0:00:16.090 ******** 2026-04-04 00:54:30.317122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.317140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 
00:54:30.317149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-04 00:54:30.317157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.317168 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.317184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-04 00:54:30.317192 | orchestrator | 2026-04-04 00:54:30.317199 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:54:30.317206 | orchestrator | Saturday 04 April 2026 00:52:17 +0000 (0:00:02.114) 0:00:18.205 ******** 2026-04-04 00:54:30.317213 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:30.317220 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:30.317227 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:30.317234 | orchestrator | 2026-04-04 00:54:30.317241 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:54:30.317248 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.406) 0:00:18.611 ******** 2026-04-04 00:54:30.317255 | orchestrator | 2026-04-04 00:54:30.317262 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:54:30.317269 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.060) 0:00:18.671 ******** 2026-04-04 00:54:30.317276 | orchestrator | 2026-04-04 00:54:30.317283 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-04 00:54:30.317291 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.060) 0:00:18.732 ******** 2026-04-04 00:54:30.317300 | orchestrator | 2026-04-04 00:54:30.317307 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-04 00:54:30.317314 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.075) 0:00:18.807 ******** 2026-04-04 00:54:30.317321 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:30.317328 | orchestrator | 2026-04-04 00:54:30.317335 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] 
********************************* 2026-04-04 00:54:30.317341 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.183) 0:00:18.990 ******** 2026-04-04 00:54:30.317348 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:30.317355 | orchestrator | 2026-04-04 00:54:30.317361 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-04 00:54:30.317368 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.189) 0:00:19.179 ******** 2026-04-04 00:54:30.317375 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317384 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:30.317391 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:30.317397 | orchestrator | 2026-04-04 00:54:30.317404 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-04 00:54:30.317416 | orchestrator | Saturday 04 April 2026 00:53:11 +0000 (0:00:52.374) 0:01:11.554 ******** 2026-04-04 00:54:30.317422 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317429 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:30.317436 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:30.317443 | orchestrator | 2026-04-04 00:54:30.317450 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-04 00:54:30.317457 | orchestrator | Saturday 04 April 2026 00:54:14 +0000 (0:01:03.463) 0:02:15.018 ******** 2026-04-04 00:54:30.317464 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:30.317471 | orchestrator | 2026-04-04 00:54:30.317477 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-04 00:54:30.317484 | orchestrator | Saturday 04 April 2026 00:54:15 +0000 (0:00:00.733) 0:02:15.752 ******** 2026-04-04 00:54:30.317491 | orchestrator | ok: 
[testbed-node-0] 2026-04-04 00:54:30.317497 | orchestrator | 2026-04-04 00:54:30.317504 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-04 00:54:30.317510 | orchestrator | Saturday 04 April 2026 00:54:18 +0000 (0:00:02.787) 0:02:18.539 ******** 2026-04-04 00:54:30.317516 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:30.317522 | orchestrator | 2026-04-04 00:54:30.317529 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-04 00:54:30.317536 | orchestrator | Saturday 04 April 2026 00:54:20 +0000 (0:00:02.090) 0:02:20.630 ******** 2026-04-04 00:54:30.317543 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:54:30.317550 | orchestrator | 2026-04-04 00:54:30.317557 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-04 00:54:30.317563 | orchestrator | Saturday 04 April 2026 00:54:22 +0000 (0:00:02.722) 0:02:23.352 ******** 2026-04-04 00:54:30.317570 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317577 | orchestrator | 2026-04-04 00:54:30.317584 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-04 00:54:30.317591 | orchestrator | Saturday 04 April 2026 00:54:26 +0000 (0:00:03.391) 0:02:26.744 ******** 2026-04-04 00:54:30.317598 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:30.317605 | orchestrator | 2026-04-04 00:54:30.317616 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:54:30.317624 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 00:54:30.317632 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 00:54:30.317646 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2026-04-04 00:54:30.317652 | orchestrator | 
2026-04-04 00:54:30.317659 | orchestrator | 
2026-04-04 00:54:30.317666 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:54:30.317672 | orchestrator | Saturday 04 April 2026 00:54:29 +0000 (0:00:02.745) 0:02:29.489 ********
2026-04-04 00:54:30.317679 | orchestrator | ===============================================================================
2026-04-04 00:54:30.317685 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 63.46s
2026-04-04 00:54:30.317691 | orchestrator | opensearch : Restart opensearch container ------------------------------ 52.37s
2026-04-04 00:54:30.317698 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.39s
2026-04-04 00:54:30.317704 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.82s
2026-04-04 00:54:30.317711 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.79s
2026-04-04 00:54:30.317717 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s
2026-04-04 00:54:30.317724 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.72s
2026-04-04 00:54:30.317736 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.33s
2026-04-04 00:54:30.317743 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.27s
2026-04-04 00:54:30.317750 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s
2026-04-04 00:54:30.317757 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.09s
2026-04-04 00:54:30.317764 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.04s
2026-04-04 00:54:30.317771 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.95s
2026-04-04 00:54:30.317778 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.51s
2026-04-04 00:54:30.317785 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.73s
2026-04-04 00:54:30.317791 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.72s
2026-04-04 00:54:30.317798 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2026-04-04 00:54:30.317805 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.52s
2026-04-04 00:54:30.317811 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2026-04-04 00:54:30.317818 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.41s
2026-04-04 00:54:30.317824 | orchestrator | 2026-04-04 00:54:30 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:30.317831 | orchestrator | 2026-04-04 00:54:30 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:33.361591 | orchestrator | 2026-04-04 00:54:33 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:33.363153 | orchestrator | 2026-04-04 00:54:33 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:33.363222 | orchestrator | 2026-04-04 00:54:33 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:36.410208 | orchestrator | 2026-04-04 00:54:36 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:36.411220 | orchestrator | 2026-04-04 00:54:36 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:36.411684 | orchestrator | 2026-04-04 00:54:36 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:39.457484 | orchestrator | 2026-04-04 00:54:39 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:39.460612 | orchestrator | 2026-04-04 00:54:39 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:39.460688 | orchestrator | 2026-04-04 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:42.509890 | orchestrator | 2026-04-04 00:54:42 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:42.511725 | orchestrator | 2026-04-04 00:54:42 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:42.511785 | orchestrator | 2026-04-04 00:54:42 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:45.554445 | orchestrator | 2026-04-04 00:54:45 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:45.556682 | orchestrator | 2026-04-04 00:54:45 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:45.556745 | orchestrator | 2026-04-04 00:54:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:48.593744 | orchestrator | 2026-04-04 00:54:48 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:48.593907 | orchestrator | 2026-04-04 00:54:48 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state STARTED
2026-04-04 00:54:48.594044 | orchestrator | 2026-04-04 00:54:48 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:54:51.636989 | orchestrator | 2026-04-04 00:54:51 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:51.640841 | orchestrator | 2026-04-04 00:54:51 | INFO  | Task 150576fd-01be-4b71-89ab-aee7b65a5e87 is in state SUCCESS
2026-04-04 00:54:51.642365 | orchestrator | 
2026-04-04 00:54:51.642428 | orchestrator | 
2026-04-04 00:54:51.642435 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
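The repeated "is in state STARTED … Wait 1 second(s) until the next check" lines above come from a plain poll-until-terminal loop. A minimal sketch of that pattern, assuming a caller-supplied `get_state` callable (the names here are hypothetical, not the actual osism client code):

```python
import time

def wait_for_task(get_state, interval=1.0, timeout=300.0):
    """Poll a task's state until it leaves STARTED, mirroring the
    'Wait N second(s) until the next check' loop seen in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state != "STARTED":
            # Terminal state reached (e.g. SUCCESS or FAILURE).
            return state
        time.sleep(interval)
    raise TimeoutError("task did not finish within the timeout")

# Usage: simulate a task that succeeds on the third poll.
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_task(lambda: next(states), interval=0)
```

The fixed-interval poll keeps the loop simple; a production client might add jitter or exponential backoff instead.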
2026-04-04 00:54:51.642444 | orchestrator | 
2026-04-04 00:54:51.642451 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-04-04 00:54:51.642458 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.087) 0:00:00.087 ********
2026-04-04 00:54:51.642468 | orchestrator | ok: [localhost] => {
2026-04-04 00:54:51.642477 | orchestrator |     "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-04-04 00:54:51.642483 | orchestrator | }
2026-04-04 00:54:51.642489 | orchestrator | 
2026-04-04 00:54:51.642495 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-04-04 00:54:51.642502 | orchestrator | Saturday 04 April 2026 00:51:59 +0000 (0:00:00.040) 0:00:00.128 ********
2026-04-04 00:54:51.642508 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-04-04 00:54:51.642516 | orchestrator | ...ignoring
2026-04-04 00:54:51.642522 | orchestrator | 
2026-04-04 00:54:51.642528 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-04-04 00:54:51.642535 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:02.822) 0:00:02.950 ********
2026-04-04 00:54:51.642597 | orchestrator | skipping: [localhost]
2026-04-04 00:54:51.642605 | orchestrator | 
2026-04-04 00:54:51.642674 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-04-04 00:54:51.642681 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:00.063) 0:00:03.014 ********
2026-04-04 00:54:51.642685 | orchestrator | ok: [localhost]
2026-04-04 00:54:51.642690 | orchestrator | 
2026-04-04 00:54:51.642694 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:54:51.642698 | orchestrator | 
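The play above probes 192.168.16.9:3306 (via Ansible's `wait_for` with a search string) and, because nothing answers yet, the expected failure is ignored and `kolla_action_mariadb` falls back to a fresh deploy. The decision logic can be sketched as a simple TCP probe; this is an illustrative approximation, not the testbed's actual implementation, and the address used below is a reserved documentation IP:

```python
import socket

def choose_mariadb_action(host, port=3306, timeout=2.0):
    """Pick the kolla action for MariaDB: 'upgrade' if a server
    already answers on the Galera port, 'deploy' otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "upgrade"
    except OSError:
        # Connection refused or timed out: no running cluster,
        # matching the ignored 'fatal' in the log above.
        return "deploy"

# On a fresh testbed the probe fails, so we deploy from scratch.
action = choose_mariadb_action("192.0.2.1", timeout=0.2)
```

Note the real play also checks for the "MariaDB" banner string rather than mere TCP reachability, which guards against an unrelated service squatting on the port.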
2026-04-04 00:54:51.642702 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:54:51.642706 | orchestrator | Saturday 04 April 2026 00:52:02 +0000 (0:00:00.225) 0:00:03.240 ********
2026-04-04 00:54:51.642710 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.642714 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.642718 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.642721 | orchestrator | 
2026-04-04 00:54:51.642725 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:54:51.642729 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:00.284) 0:00:03.525 ********
2026-04-04 00:54:51.642733 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-04 00:54:51.642738 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-04 00:54:51.642741 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-04 00:54:51.642745 | orchestrator | 
2026-04-04 00:54:51.642749 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-04 00:54:51.642753 | orchestrator | 
2026-04-04 00:54:51.642757 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-04 00:54:51.642761 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:00.508) 0:00:04.033 ********
2026-04-04 00:54:51.642764 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-04 00:54:51.642768 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-04 00:54:51.642772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-04 00:54:51.642776 | orchestrator | 
2026-04-04 00:54:51.642780 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-04 00:54:51.642801 | orchestrator | Saturday 04 April 2026 00:52:03 +0000 (0:00:00.344) 
0:00:04.377 ******** 2026-04-04 00:54:51.642806 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:51.643084 | orchestrator | 2026-04-04 00:54:51.643103 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-04 00:54:51.643109 | orchestrator | Saturday 04 April 2026 00:52:04 +0000 (0:00:00.590) 0:00:04.967 ******** 2026-04-04 00:54:51.643153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643194 | orchestrator | 2026-04-04 00:54:51.643208 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-04 00:54:51.643215 | orchestrator | Saturday 04 April 2026 00:52:07 +0000 (0:00:02.933) 0:00:07.901 ******** 2026-04-04 00:54:51.643221 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643229 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643235 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:51.643240 | orchestrator | 2026-04-04 00:54:51.643246 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-04 00:54:51.643253 | orchestrator | Saturday 04 April 2026 00:52:08 +0000 (0:00:00.582) 0:00:08.483 ******** 2026-04-04 00:54:51.643260 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643267 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643273 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:51.643280 | orchestrator | 2026-04-04 00:54:51.643286 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-04 00:54:51.643292 | orchestrator | Saturday 04 April 2026 00:52:09 +0000 (0:00:01.462) 0:00:09.946 ******** 2026-04-04 00:54:51.643333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643375 | orchestrator | 2026-04-04 00:54:51.643381 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-04 00:54:51.643387 | orchestrator | Saturday 04 April 2026 00:52:12 +0000 (0:00:03.120) 0:00:13.066 ******** 2026-04-04 00:54:51.643393 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643399 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643405 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:51.643411 | orchestrator | 2026-04-04 00:54:51.643417 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-04 00:54:51.643423 | orchestrator | Saturday 04 April 2026 00:52:13 +0000 (0:00:01.057) 0:00:14.124 ******** 2026-04-04 00:54:51.643429 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:54:51.643435 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:54:51.643442 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:54:51.643448 | orchestrator | 2026-04-04 00:54:51.643453 | orchestrator | TASK [mariadb : include_tasks] 
************************************************* 2026-04-04 00:54:51.643460 | orchestrator | Saturday 04 April 2026 00:52:17 +0000 (0:00:04.130) 0:00:18.254 ******** 2026-04-04 00:54:51.643466 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:54:51.643473 | orchestrator | 2026-04-04 00:54:51.643503 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-04 00:54:51.643509 | orchestrator | Saturday 04 April 2026 00:52:18 +0000 (0:00:00.460) 0:00:18.715 ******** 2026-04-04 00:54:51.643529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643534 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:51.643538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643550 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643566 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643570 | orchestrator | 2026-04-04 00:54:51.643575 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-04 00:54:51.643581 | orchestrator | Saturday 04 April 2026 00:52:21 +0000 (0:00:03.136) 0:00:21.851 ******** 2026-04-04 00:54:51.643587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643652 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643677 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643715 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:51.643721 | orchestrator | 2026-04-04 00:54:51.643727 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-04 00:54:51.643734 | orchestrator | Saturday 04 April 2026 00:52:23 +0000 (0:00:02.361) 0:00:24.213 ******** 2026-04-04 00:54:51.643746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643753 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:54:51.643767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643782 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:54:51.643794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-04 00:54:51.643802 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:54:51.643808 | orchestrator | 2026-04-04 00:54:51.643815 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-04-04 00:54:51.643821 | orchestrator | Saturday 04 April 2026 00:52:26 +0000 (0:00:02.822) 0:00:27.035 ******** 2026-04-04 00:54:51.643834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-04 00:54:51.643881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-04 00:54:51.643915 | orchestrator |
2026-04-04 00:54:51.643921 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-04 00:54:51.643927 | orchestrator | Saturday 04 April 2026 00:52:29 +0000 (0:00:03.109) 0:00:30.145 ********
2026-04-04 00:54:51.643932 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.643938 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:51.643943 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:51.643948 | orchestrator |
2026-04-04 00:54:51.643954 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-04 00:54:51.643959 | orchestrator | Saturday 04 April 2026 00:52:30 +0000 (0:00:00.873) 0:00:31.018 ********
2026-04-04 00:54:51.643965 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.643972 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.643979 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.643985 | orchestrator |
2026-04-04 00:54:51.643991 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-04 00:54:51.643998 | orchestrator | Saturday 04 April 2026 00:52:30 +0000 (0:00:00.316) 0:00:31.308 ********
2026-04-04 00:54:51.644005 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644010 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.644016 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.644022 | orchestrator |
2026-04-04 00:54:51.644028 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-04 00:54:51.644034 | orchestrator | Saturday 04 April 2026 00:52:31 +0000 (0:00:00.316) 0:00:31.625 ********
2026-04-04 00:54:51.644042 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-04 00:54:51.644049 | orchestrator | ...ignoring
2026-04-04 00:54:51.644055 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-04 00:54:51.644061 | orchestrator | ...ignoring
2026-04-04 00:54:51.644067 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-04 00:54:51.644073 | orchestrator | ...ignoring
2026-04-04 00:54:51.644079 | orchestrator |
2026-04-04 00:54:51.644085 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-04 00:54:51.644092 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:11.109) 0:00:42.734 ********
2026-04-04 00:54:51.644098 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644104 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.644110 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.644116 | orchestrator |
2026-04-04 00:54:51.644123 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-04 00:54:51.644129 | orchestrator | Saturday 04 April 2026 00:52:42 +0000 (0:00:00.351) 0:00:43.100 ********
2026-04-04 00:54:51.644136 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644142 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644148 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644162 | orchestrator |
2026-04-04 00:54:51.644168 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-04 00:54:51.644173 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.351) 0:00:43.451 ********
2026-04-04 00:54:51.644180 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644191 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644198 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644204 | orchestrator |
2026-04-04 00:54:51.644210 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-04 00:54:51.644216 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.367) 0:00:43.819 ********
2026-04-04 00:54:51.644222 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644228 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644234 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644240 | orchestrator |
2026-04-04 00:54:51.644246 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-04 00:54:51.644252 | orchestrator | Saturday 04 April 2026 00:52:43 +0000 (0:00:00.526) 0:00:44.346 ********
2026-04-04 00:54:51.644257 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644264 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.644270 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.644276 | orchestrator |
2026-04-04 00:54:51.644282 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-04 00:54:51.644289 | orchestrator | Saturday 04 April 2026 00:52:44 +0000 (0:00:00.355) 0:00:44.702 ********
2026-04-04 00:54:51.644302 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644310 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644317 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644324 | orchestrator |
2026-04-04 00:54:51.644331 | orchestrator | TASK [mariadb : include_tasks]
*************************************************
2026-04-04 00:54:51.644336 | orchestrator | Saturday 04 April 2026 00:52:44 +0000 (0:00:00.341) 0:00:45.043 ********
2026-04-04 00:54:51.644339 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644343 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644347 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-04 00:54:51.644351 | orchestrator |
2026-04-04 00:54:51.644355 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-04 00:54:51.644359 | orchestrator | Saturday 04 April 2026 00:52:44 +0000 (0:00:00.327) 0:00:45.370 ********
2026-04-04 00:54:51.644363 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644367 | orchestrator |
2026-04-04 00:54:51.644370 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-04 00:54:51.644374 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:10.103) 0:00:55.473 ********
2026-04-04 00:54:51.644378 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644382 | orchestrator |
2026-04-04 00:54:51.644385 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-04 00:54:51.644390 | orchestrator | Saturday 04 April 2026 00:52:55 +0000 (0:00:00.218) 0:00:55.692 ********
2026-04-04 00:54:51.644393 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644397 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644401 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644407 | orchestrator |
2026-04-04 00:54:51.644413 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-04 00:54:51.644419 | orchestrator | Saturday 04 April 2026 00:52:56 +0000 (0:00:00.704) 0:00:56.397 ********
2026-04-04 00:54:51.644425 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644433 | orchestrator |
2026-04-04 00:54:51.644440 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-04 00:54:51.644446 | orchestrator | Saturday 04 April 2026 00:53:03 +0000 (0:00:07.451) 0:01:03.849 ********
2026-04-04 00:54:51.644452 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644458 | orchestrator |
2026-04-04 00:54:51.644462 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-04 00:54:51.644472 | orchestrator | Saturday 04 April 2026 00:53:05 +0000 (0:00:01.623) 0:01:05.473 ********
2026-04-04 00:54:51.644476 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644479 | orchestrator |
2026-04-04 00:54:51.644483 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-04 00:54:51.644488 | orchestrator | Saturday 04 April 2026 00:53:07 +0000 (0:00:02.663) 0:01:08.136 ********
2026-04-04 00:54:51.644492 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644495 | orchestrator |
2026-04-04 00:54:51.644500 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-04 00:54:51.644503 | orchestrator | Saturday 04 April 2026 00:53:07 +0000 (0:00:00.114) 0:01:08.251 ********
2026-04-04 00:54:51.644507 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644511 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644515 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644519 | orchestrator |
2026-04-04 00:54:51.644523 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-04 00:54:51.644526 | orchestrator | Saturday 04 April 2026 00:53:08 +0000 (0:00:00.306) 0:01:08.557 ********
2026-04-04 00:54:51.644530 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.644534 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:51.644538 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:51.644541 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-04 00:54:51.644545 | orchestrator |
2026-04-04 00:54:51.644549 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-04 00:54:51.644553 | orchestrator | skipping: no hosts matched
2026-04-04 00:54:51.644557 | orchestrator |
2026-04-04 00:54:51.644561 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-04 00:54:51.644565 | orchestrator |
2026-04-04 00:54:51.644571 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-04 00:54:51.644577 | orchestrator | Saturday 04 April 2026 00:53:08 +0000 (0:00:00.317) 0:01:08.874 ********
2026-04-04 00:54:51.644584 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:54:51.644590 | orchestrator |
2026-04-04 00:54:51.644597 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-04 00:54:51.644604 | orchestrator | Saturday 04 April 2026 00:53:26 +0000 (0:00:17.826) 0:01:26.701 ********
2026-04-04 00:54:51.644608 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.644612 | orchestrator |
2026-04-04 00:54:51.644616 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-04 00:54:51.644619 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:15.545) 0:01:42.246 ********
2026-04-04 00:54:51.644627 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.644631 | orchestrator |
2026-04-04 00:54:51.644635 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-04 00:54:51.644639 | orchestrator |
2026-04-04 00:54:51.644643 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-04 00:54:51.644646 | orchestrator | Saturday 04 April 2026 00:53:44 +0000 (0:00:02.568) 0:01:44.815 ********
2026-04-04 00:54:51.644650 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:54:51.644654 | orchestrator |
2026-04-04 00:54:51.644658 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-04 00:54:51.644661 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:15.997) 0:02:00.812 ********
2026-04-04 00:54:51.644665 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.644669 | orchestrator |
2026-04-04 00:54:51.644675 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-04 00:54:51.644681 | orchestrator | Saturday 04 April 2026 00:54:16 +0000 (0:00:15.895) 0:02:16.707 ********
2026-04-04 00:54:51.644688 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.644693 | orchestrator |
2026-04-04 00:54:51.644700 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-04 00:54:51.644712 | orchestrator |
2026-04-04 00:54:51.644725 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-04 00:54:51.644731 | orchestrator | Saturday 04 April 2026 00:54:18 +0000 (0:00:02.352) 0:02:19.060 ********
2026-04-04 00:54:51.644738 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644744 | orchestrator |
2026-04-04 00:54:51.644750 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-04 00:54:51.644757 | orchestrator | Saturday 04 April 2026 00:54:30 +0000 (0:00:11.381) 0:02:30.441 ********
2026-04-04 00:54:51.644762 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644766 | orchestrator |
2026-04-04 00:54:51.644770 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-04 00:54:51.644776 | orchestrator | Saturday 04 April 2026 00:54:34 +0000 (0:00:04.554) 0:02:34.995 ********
2026-04-04 00:54:51.644782 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644790 | orchestrator |
2026-04-04 00:54:51.644797 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-04 00:54:51.644803 | orchestrator |
2026-04-04 00:54:51.644809 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-04 00:54:51.644817 | orchestrator | Saturday 04 April 2026 00:54:36 +0000 (0:00:02.376) 0:02:37.372 ********
2026-04-04 00:54:51.644821 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:54:51.644825 | orchestrator |
2026-04-04 00:54:51.644829 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-04 00:54:51.644833 | orchestrator | Saturday 04 April 2026 00:54:37 +0000 (0:00:00.639) 0:02:38.011 ********
2026-04-04 00:54:51.644837 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644842 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644850 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644854 | orchestrator |
2026-04-04 00:54:51.644857 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-04 00:54:51.644862 | orchestrator | Saturday 04 April 2026 00:54:39 +0000 (0:00:02.327) 0:02:40.339 ********
2026-04-04 00:54:51.644866 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644869 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644873 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644877 | orchestrator |
2026-04-04 00:54:51.644881 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-04 00:54:51.644885 | orchestrator | Saturday 04 April 2026 00:54:42 +0000 (0:00:02.853) 0:02:43.193 ********
2026-04-04 00:54:51.644916 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644922 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644928 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644933 | orchestrator |
2026-04-04 00:54:51.644939 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-04 00:54:51.644944 | orchestrator | Saturday 04 April 2026 00:54:45 +0000 (0:00:02.457) 0:02:45.650 ********
2026-04-04 00:54:51.644949 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.644961 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.644970 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:54:51.644975 | orchestrator |
2026-04-04 00:54:51.644981 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-04 00:54:51.644987 | orchestrator | Saturday 04 April 2026 00:54:47 +0000 (0:00:02.479) 0:02:48.130 ********
2026-04-04 00:54:51.644992 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:54:51.644999 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:54:51.645005 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:54:51.645012 | orchestrator |
2026-04-04 00:54:51.645018 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-04 00:54:51.645025 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:02.484) 0:02:50.614 ********
2026-04-04 00:54:51.645032 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:54:51.645038 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:54:51.645044 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:54:51.645056 | orchestrator |
2026-04-04 00:54:51.645062 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:54:51.645068 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-04-04 00:54:51.645075 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-04-04 00:54:51.645084 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-04 00:54:51.645089 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-04-04 00:54:51.645092 | orchestrator |
2026-04-04 00:54:51.645096 | orchestrator |
2026-04-04 00:54:51.645104 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:54:51.645108 | orchestrator | Saturday 04 April 2026 00:54:50 +0000 (0:00:00.214) 0:02:50.828 ********
2026-04-04 00:54:51.645112 | orchestrator | ===============================================================================
2026-04-04 00:54:51.645116 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.82s
2026-04-04 00:54:51.645120 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.44s
2026-04-04 00:54:51.645123 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.38s
2026-04-04 00:54:51.645127 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.11s
2026-04-04 00:54:51.645131 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.10s
2026-04-04 00:54:51.645134 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.45s
2026-04-04 00:54:51.645144 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s
2026-04-04 00:54:51.645149 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s
2026-04-04 00:54:51.645155 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.13s
2026-04-04 00:54:51.645161 | orchestrator | service-cert-copy :
mariadb | Copying over extra CA certificates -------- 3.14s 2026-04-04 00:54:51.645166 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.12s 2026-04-04 00:54:51.645171 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.11s 2026-04-04 00:54:51.645177 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.93s 2026-04-04 00:54:51.645183 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.85s 2026-04-04 00:54:51.645189 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.82s 2026-04-04 00:54:51.645195 | orchestrator | Check MariaDB service --------------------------------------------------- 2.82s 2026-04-04 00:54:51.645202 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2026-04-04 00:54:51.645208 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.48s 2026-04-04 00:54:51.645215 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.48s 2026-04-04 00:54:51.645221 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.46s 2026-04-04 00:54:51.645228 | orchestrator | 2026-04-04 00:54:51 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:54:54.682808 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:54:54.685389 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED 2026-04-04 00:54:54.687540 | orchestrator | 2026-04-04 00:54:54 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:54:54.687657 | orchestrator | 2026-04-04 00:54:54 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:54:57.722260 | orchestrator | 2026-04-04 00:54:57 | 
INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:54:57.723239 | orchestrator | 2026-04-04 00:54:57 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:54:57.724575 | orchestrator | 2026-04-04 00:54:57 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:54:57.725416 | orchestrator | 2026-04-04 00:54:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:00.766974 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:00.769171 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:00.770440 | orchestrator | 2026-04-04 00:55:00 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:00.770708 | orchestrator | 2026-04-04 00:55:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:03.809566 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:03.810913 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:03.813316 | orchestrator | 2026-04-04 00:55:03 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:03.813590 | orchestrator | 2026-04-04 00:55:03 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:06.853623 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:06.854137 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:06.854990 | orchestrator | 2026-04-04 00:55:06 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:06.855292 | orchestrator | 2026-04-04 00:55:06 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:09.884531 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:09.885127 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:09.885936 | orchestrator | 2026-04-04 00:55:09 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:09.886088 | orchestrator | 2026-04-04 00:55:09 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:12.932621 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:12.934244 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:12.935356 | orchestrator | 2026-04-04 00:55:12 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:12.935387 | orchestrator | 2026-04-04 00:55:12 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:15.969754 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:15.970160 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:15.971107 | orchestrator | 2026-04-04 00:55:15 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:15.971153 | orchestrator | 2026-04-04 00:55:15 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:19.016199 | orchestrator | 2026-04-04 00:55:19 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:19.017323 | orchestrator | 2026-04-04 00:55:19 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:19.018150 | orchestrator | 2026-04-04 00:55:19 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:19.018174 | orchestrator | 2026-04-04 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:22.063557 | orchestrator | 2026-04-04 00:55:22 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:22.065274 | orchestrator | 2026-04-04 00:55:22 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:22.066316 | orchestrator | 2026-04-04 00:55:22 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:22.066365 | orchestrator | 2026-04-04 00:55:22 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:25.100515 | orchestrator | 2026-04-04 00:55:25 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:25.103624 | orchestrator | 2026-04-04 00:55:25 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state STARTED
2026-04-04 00:55:25.106279 | orchestrator | 2026-04-04 00:55:25 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:55:25.106330 | orchestrator | 2026-04-04 00:55:25 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:55:28.142436 | orchestrator | 2026-04-04 00:55:28 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED
2026-04-04 00:55:28.147233 | orchestrator | 2026-04-04 00:55:28 | INFO  | Task d6e980a4-1c95-4f41-87f5-f0dc934bb6d0 is in state SUCCESS
2026-04-04 00:55:28.148957 | orchestrator |
2026-04-04 00:55:28.148996 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-04 00:55:28.149002 | orchestrator | 2.16.14
2026-04-04 00:55:28.149006 | orchestrator |
2026-04-04 00:55:28.149011 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-04 00:55:28.149019 | orchestrator |
2026-04-04 00:55:28.149026 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-04 00:55:28.149036 |
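The repeated status lines above come from a client polling each task's state every few seconds until it leaves STARTED. A minimal sketch of that polling pattern, assuming a caller-supplied `get_task_state` lookup (the helper name is illustrative, not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until none is still pending, logging each check."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            # Matches the "Wait N second(s) until the next check" lines in the log.
            print(f"INFO  | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log, three tasks are polled together and each round prints one line per still-running task before sleeping, which is exactly the shape of the repeated output above.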
orchestrator | Saturday 04 April 2026 00:53:36 +0000 (0:00:00.496) 0:00:00.496 ********
2026-04-04 00:55:28.149044 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:28.149051 | orchestrator |
2026-04-04 00:55:28.149058 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-04 00:55:28.149065 | orchestrator | Saturday 04 April 2026 00:53:37 +0000 (0:00:00.528) 0:00:01.024 ********
2026-04-04 00:55:28.149071 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149077 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149084 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149090 | orchestrator |
2026-04-04 00:55:28.149097 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-04 00:55:28.149104 | orchestrator | Saturday 04 April 2026 00:53:38 +0000 (0:00:00.931) 0:00:01.956 ********
2026-04-04 00:55:28.149112 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149119 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149126 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149133 | orchestrator |
2026-04-04 00:55:28.149149 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-04 00:55:28.149156 | orchestrator | Saturday 04 April 2026 00:53:38 +0000 (0:00:00.265) 0:00:02.221 ********
2026-04-04 00:55:28.149160 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149164 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149168 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149172 | orchestrator |
2026-04-04 00:55:28.149188 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-04 00:55:28.149192 | orchestrator | Saturday 04 April 2026 00:53:39 +0000 (0:00:00.728) 0:00:02.950 ********
2026-04-04 00:55:28.149196 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149200 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149203 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149207 | orchestrator |
2026-04-04 00:55:28.149211 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-04 00:55:28.149215 | orchestrator | Saturday 04 April 2026 00:53:39 +0000 (0:00:00.299) 0:00:03.249 ********
2026-04-04 00:55:28.149218 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149222 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149226 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149230 | orchestrator |
2026-04-04 00:55:28.149234 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-04 00:55:28.149238 | orchestrator | Saturday 04 April 2026 00:53:39 +0000 (0:00:00.287) 0:00:03.536 ********
2026-04-04 00:55:28.149241 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149245 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149249 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149253 | orchestrator |
2026-04-04 00:55:28.149256 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-04 00:55:28.149260 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.307) 0:00:03.843 ********
2026-04-04 00:55:28.149264 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.149269 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.149272 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.149278 | orchestrator |
2026-04-04 00:55:28.149284 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-04 00:55:28.149293 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.506) 0:00:04.350 ********
2026-04-04 00:55:28.149315 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149321 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149394 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149399 | orchestrator |
2026-04-04 00:55:28.149403 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-04 00:55:28.149407 | orchestrator | Saturday 04 April 2026 00:53:40 +0000 (0:00:00.279) 0:00:04.629 ********
2026-04-04 00:55:28.149410 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:55:28.149458 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:55:28.149462 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:55:28.149466 | orchestrator |
2026-04-04 00:55:28.149469 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-04 00:55:28.149473 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:00.406) 0:00:05.263 ********
2026-04-04 00:55:28.149477 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149480 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149484 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149488 | orchestrator |
2026-04-04 00:55:28.149491 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-04 00:55:28.149495 | orchestrator | Saturday 04 April 2026 00:53:41 +0000 (0:00:00.406) 0:00:05.670 ********
2026-04-04 00:55:28.149499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-04 00:55:28.149503 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-04 00:55:28.149506 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-04 00:55:28.149510 | orchestrator |
2026-04-04 00:55:28.149514 | orchestrator |
TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-04 00:55:28.149776 | orchestrator | Saturday 04 April 2026 00:53:44 +0000 (0:00:03.114) 0:00:08.785 ********
2026-04-04 00:55:28.149787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:55:28.149797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:55:28.149801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:55:28.149805 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.149809 | orchestrator |
2026-04-04 00:55:28.149847 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-04 00:55:28.149853 | orchestrator | Saturday 04 April 2026 00:53:45 +0000 (0:00:00.366) 0:00:09.151 ********
2026-04-04 00:55:28.149858 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149871 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.149875 | orchestrator |
2026-04-04 00:55:28.149879 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-04 00:55:28.149887 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:00.698) 0:00:09.850 ********
2026-04-04 00:55:28.149892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149906 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.149909 | orchestrator |
2026-04-04 00:55:28.149913 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-04 00:55:28.149917 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:00.147) 0:00:09.998 ********
2026-04-04 00:55:28.149922 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1b8880b69c8b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-04 00:53:42.846083', 'end': '2026-04-04 00:53:42.884932', 'delta': '0:00:00.038849', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1b8880b69c8b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149931 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cc2499d80bc6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-04 00:53:43.894348', 'end': '2026-04-04 00:53:43.932089', 'delta': '0:00:00.037741', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cc2499d80bc6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149948 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'adbaa8de2c30', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-04 00:53:44.785365', 'end': '2026-04-04 00:53:44.842247', 'delta': '0:00:00.056882', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['adbaa8de2c30'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.149953 | orchestrator |
2026-04-04 00:55:28.149957 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-04 00:55:28.149961 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:00.307) 0:00:10.305 ********
2026-04-04 00:55:28.149965 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.149968 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.149972 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.149976 | orchestrator |
2026-04-04 00:55:28.149982 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-04 00:55:28.149986 | orchestrator | Saturday 04 April 2026 00:53:46 +0000 (0:00:00.415) 0:00:10.721 ********
2026-04-04 00:55:28.149990 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-04 00:55:28.149994 | orchestrator |
2026-04-04 00:55:28.149997 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-04 00:55:28.150001 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:01.389) 0:00:12.111 ********
2026-04-04 00:55:28.150005 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150008 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150037 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150043 | orchestrator |
2026-04-04 00:55:28.150047 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-04 00:55:28.150050 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.254) 0:00:12.366 ********
2026-04-04 00:55:28.150054 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150058 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150062 | orchestrator | skipping:
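The "Find a running mon container" results above show the role shelling out to `docker ps -q --filter name=ceph-mon-<hostname>` on each monitor and recording the single container ID that comes back on stdout. A hedged sketch of that lookup in Python (assumes a local `docker` CLI; the function names are illustrative, not the role's actual code):

```python
import subprocess

def parse_container_ids(stdout: str):
    # `docker ps -q` prints one container ID per line; empty output means no match.
    return stdout.split()

def find_running_mon(hostname: str):
    """Return the ID of a running ceph-mon container for `hostname`, or None.

    Mirrors the `docker ps -q --filter name=ceph-mon-<hostname>` command the
    ceph-facts role runs when looking for an active monitor container.
    """
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True, text=True, check=False,
    )
    ids = parse_container_ids(result.stdout)
    return ids[0] if ids else None
```

In the log, the three lookups return `1b8880b69c8b`, `cc2499d80bc6`, and `adbaa8de2c30` for testbed-node-0/1/2, which the role then uses to build `container_exec_cmd`.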
[testbed-node-5]
2026-04-04 00:55:28.150065 | orchestrator |
2026-04-04 00:55:28.150069 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:55:28.150073 | orchestrator | Saturday 04 April 2026 00:53:48 +0000 (0:00:00.360) 0:00:12.726 ********
2026-04-04 00:55:28.150087 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150091 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150095 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150098 | orchestrator |
2026-04-04 00:55:28.150102 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-04 00:55:28.150106 | orchestrator | Saturday 04 April 2026 00:53:49 +0000 (0:00:00.370) 0:00:13.097 ********
2026-04-04 00:55:28.150109 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.150114 | orchestrator |
2026-04-04 00:55:28.150124 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-04 00:55:28.150134 | orchestrator | Saturday 04 April 2026 00:53:49 +0000 (0:00:00.117) 0:00:13.215 ********
2026-04-04 00:55:28.150140 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150146 | orchestrator |
2026-04-04 00:55:28.150152 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-04 00:55:28.150157 | orchestrator | Saturday 04 April 2026 00:53:49 +0000 (0:00:00.200) 0:00:13.415 ********
2026-04-04 00:55:28.150163 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150170 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150176 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150182 | orchestrator |
2026-04-04 00:55:28.150187 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-04 00:55:28.150193 | orchestrator | Saturday 04 April 2026 00:53:49 +0000 (0:00:00.258) 0:00:13.674 ********
2026-04-04 00:55:28.150199 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150206 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150212 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150218 | orchestrator |
2026-04-04 00:55:28.150224 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-04 00:55:28.150230 | orchestrator | Saturday 04 April 2026 00:53:50 +0000 (0:00:00.261) 0:00:13.935 ********
2026-04-04 00:55:28.150236 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150242 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150248 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150254 | orchestrator |
2026-04-04 00:55:28.150261 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-04 00:55:28.150267 | orchestrator | Saturday 04 April 2026 00:53:50 +0000 (0:00:00.447) 0:00:14.383 ********
2026-04-04 00:55:28.150273 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150277 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150281 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150285 | orchestrator |
2026-04-04 00:55:28.150288 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-04 00:55:28.150293 | orchestrator | Saturday 04 April 2026 00:53:50 +0000 (0:00:00.275) 0:00:14.658 ********
2026-04-04 00:55:28.150300 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150307 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150313 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150319 | orchestrator |
2026-04-04 00:55:28.150326 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-04 00:55:28.150330 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:00.227) 0:00:14.885
********
2026-04-04 00:55:28.150334 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150337 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150341 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150364 | orchestrator |
2026-04-04 00:55:28.150368 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-04 00:55:28.150372 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:00.330) 0:00:15.215 ********
2026-04-04 00:55:28.150376 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.150380 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.150384 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.150387 | orchestrator |
2026-04-04 00:55:28.150391 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-04 00:55:28.150395 | orchestrator | Saturday 04 April 2026 00:53:51 +0000 (0:00:00.388) 0:00:15.604 ********
2026-04-04 00:55:28.150403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519', 'dm-uuid-LVM-M9GI4tNPMhIL9E0kFjOEeN17N1f5LxVN4O5GSm4RLJBoiT8R2ghPV5w3wf3nWemL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906', 'dm-uuid-LVM-r0lB9UuGpQCf3kMFs8zvHlZuRtH2PKlnpVETyxuv7nEpJmzX6s3HbLpsn28uK4Tg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:28.150484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ACeHCA-O1ys-44K7-0m3K-pzzu-98Hz-IMyawd', 'scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8', 'scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:28.150518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f', 'dm-uuid-LVM-HT7voBypEw31a9Cjr4Fa1wcBJgYUr5EfbXh6BXfE3G6hwaB7cC5YacX2YO8ZHwbY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcJoM8-3FHZ-1ME2-NVPJ-2WCZ-VPLE-T2V5u3', 'scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737', 'scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:28.150542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa', 'dm-uuid-LVM-qMvt7xqAxXG2O8BdCvvt7q9bmWDLB7rZXdoxIa8uZ0hlbHFQg22690Xpbwin8xpu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9', 'scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:28.150555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-04 00:55:28.150561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-04 00:55:28.150567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-04-04 00:55:28.150591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150599 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:28.150605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159', 'dm-uuid-LVM-6PCLJiqtncSsW11ER2Vse6KNZiossrrndGP1WdFeKSiTlqeTvJKRFEmvnrMRuJtR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdOOIN-sqdQ-Yzbu-z9Ck-YhV9-4eU3-Q05miU', 'scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55', 'scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z5si0g-bXnY-Uer7-JCzi-gXmG-Q6Ma-iD3UG0', 'scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159', 'scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7', 'dm-uuid-LVM-tV9ZTDPHn1Gk7L263V8luxEzWE16Jn61SmQpaaQwl00FKWtcO1GG0ZAv69UTxQW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-04-04 00:55:28.150665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8', 'scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150684 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:28.150692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-04 00:55:28.150748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150760 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rV2tHg-lSWp-N667-0UVN-DDUM-luRq-WRLITf', 'scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae', 'scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jAoZd6-7gHp-96M7-Ytyk-lMu0-4WAT-KhB2fY', 'scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905', 'scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5', 'scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-04 00:55:28.150848 | orchestrator | skipping: [testbed-node-5] 2026-04-04 00:55:28.150852 | orchestrator | 2026-04-04 00:55:28.150856 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-04 00:55:28.150860 | orchestrator | Saturday 04 April 2026 00:53:52 +0000 (0:00:00.492) 0:00:16.097 ******** 2026-04-04 00:55:28.150867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519', 'dm-uuid-LVM-M9GI4tNPMhIL9E0kFjOEeN17N1f5LxVN4O5GSm4RLJBoiT8R2ghPV5w3wf3nWemL'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906', 'dm-uuid-LVM-r0lB9UuGpQCf3kMFs8zvHlZuRtH2PKlnpVETyxuv7nEpJmzX6s3HbLpsn28uK4Tg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150880 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150895 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150908 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f', 'dm-uuid-LVM-HT7voBypEw31a9Cjr4Fa1wcBJgYUr5EfbXh6BXfE3G6hwaB7cC5YacX2YO8ZHwbY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa', 'dm-uuid-LVM-qMvt7xqAxXG2O8BdCvvt7q9bmWDLB7rZXdoxIa8uZ0hlbHFQg22690Xpbwin8xpu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150947 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16', 'scsi-SQEMU_QEMU_HARDDISK_1df993b0-f2e3-4765-ad08-d2a9ca0c61ae-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150961 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150976 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f0c57fe1--7323--5f70--a575--22ad75776519-osd--block--f0c57fe1--7323--5f70--a575--22ad75776519'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ACeHCA-O1ys-44K7-0m3K-pzzu-98Hz-IMyawd', 'scsi-0QEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8', 'scsi-SQEMU_QEMU_HARDDISK_aa04dcb3-9f04-4660-8785-ade3b95c2bd8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e865913--a109--5f6b--9820--a5901c50a906-osd--block--1e865913--a109--5f6b--9820--a5901c50a906'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcJoM8-3FHZ-1ME2-NVPJ-2WCZ-VPLE-T2V5u3', 'scsi-0QEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737', 'scsi-SQEMU_QEMU_HARDDISK_4d96aee6-67ba-49f8-bc7c-2d85a42af737'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.150988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9', 'scsi-SQEMU_QEMU_HARDDISK_5b6ff0f2-3c26-4156-872a-5361d1bd2bb9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151064 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-03-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151076 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151080 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:28.151084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7223361-eb25-4952-96a2-78fcadfdbdca-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-04 00:55:28.151101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2f7bbb1d--c278--5154--a1d3--309d62b79a2f-osd--block--2f7bbb1d--c278--5154--a1d3--309d62b79a2f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdOOIN-sqdQ-Yzbu-z9Ck-YhV9-4eU3-Q05miU', 'scsi-0QEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55', 'scsi-SQEMU_QEMU_HARDDISK_aea0a796-d357-4fa7-8d72-1f8005c02d55'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151105 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159', 'dm-uuid-LVM-6PCLJiqtncSsW11ER2Vse6KNZiossrrndGP1WdFeKSiTlqeTvJKRFEmvnrMRuJtR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa-osd--block--b98f96ba--ddcd--5dd8--8e53--77fbcda444fa'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z5si0g-bXnY-Uer7-JCzi-gXmG-Q6Ma-iD3UG0', 'scsi-0QEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159', 'scsi-SQEMU_QEMU_HARDDISK_86e206f3-2d5a-4624-95fc-aec866356159'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7', 'dm-uuid-LVM-tV9ZTDPHn1Gk7L263V8luxEzWE16Jn61SmQpaaQwl00FKWtcO1GG0ZAv69UTxQW3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8', 'scsi-SQEMU_QEMU_HARDDISK_06ea839a-b266-4e51-93b3-b1dda83a55b8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151140 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151144 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:28.151148 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151159 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16', 'scsi-SQEMU_QEMU_HARDDISK_43a170e0-9151-405a-b413-7377f27a751c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151190 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--92575011--0645--5cdf--badf--43ad86ae8159-osd--block--92575011--0645--5cdf--badf--43ad86ae8159'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rV2tHg-lSWp-N667-0UVN-DDUM-luRq-WRLITf', 'scsi-0QEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae', 'scsi-SQEMU_QEMU_HARDDISK_b430c263-2f81-418d-8192-e181c70d45ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--35995e13--d19e--546f--ae20--ff296f4077c7-osd--block--35995e13--d19e--546f--ae20--ff296f4077c7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jAoZd6-7gHp-96M7-Ytyk-lMu0-4WAT-KhB2fY', 'scsi-0QEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905', 'scsi-SQEMU_QEMU_HARDDISK_19f8077a-5fb2-4798-9d2e-069ef293e905'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-04 00:55:28.151201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5', 'scsi-SQEMU_QEMU_HARDDISK_e5c55c1d-a7d7-4703-805a-3622b0d8a5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.151207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-04-00-03-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-04 00:55:28.151211 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151215 | orchestrator |
2026-04-04 00:55:28.151219 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-04 00:55:28.151223 | orchestrator | Saturday 04 April 2026 00:53:52 +0000 (0:00:00.539) 0:00:16.636 ********
2026-04-04 00:55:28.151227 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.151231 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.151237 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.151243 | orchestrator |
2026-04-04 00:55:28.151301 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-04 00:55:28.151308 | orchestrator | Saturday 04 April 2026 00:53:53 +0000 (0:00:00.592) 0:00:17.228 ********
2026-04-04 00:55:28.151315 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.151321 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.151327 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.151333 | orchestrator |
2026-04-04 00:55:28.151346 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:55:28.151353 | orchestrator | Saturday 04 April 2026 00:53:53 +0000 (0:00:00.365) 0:00:17.594 ********
2026-04-04 00:55:28.151357 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.151361 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.151365 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.151369 | orchestrator |
2026-04-04 00:55:28.151372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:55:28.151376 | orchestrator | Saturday 04 April 2026 00:53:55 +0000 (0:00:01.627) 0:00:19.221 ********
2026-04-04 00:55:28.151380 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151384 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151392 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151395 | orchestrator |
2026-04-04 00:55:28.151399 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-04 00:55:28.151403 | orchestrator | Saturday 04 April 2026 00:53:55 +0000 (0:00:00.281) 0:00:19.503 ********
2026-04-04 00:55:28.151407 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151411 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151414 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151418 | orchestrator |
2026-04-04 00:55:28.151422 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-04 00:55:28.151426 | orchestrator | Saturday 04 April 2026 00:53:56 +0000 (0:00:00.446) 0:00:19.949 ********
2026-04-04 00:55:28.151430 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151433 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151437 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151442 | orchestrator |
2026-04-04 00:55:28.151448 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-04 00:55:28.151456 | orchestrator | Saturday 04 April 2026 00:53:56 +0000 (0:00:00.494) 0:00:20.444 ********
2026-04-04 00:55:28.151465 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:55:28.151472 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:55:28.151477 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:55:28.151483 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:55:28.151489 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:55:28.151495 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:55:28.151501 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:55:28.151507 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:55:28.151514 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:55:28.151520 | orchestrator |
2026-04-04 00:55:28.151526 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-04 00:55:28.151533 | orchestrator | Saturday 04 April 2026 00:53:57 +0000 (0:00:00.836) 0:00:21.281 ********
2026-04-04 00:55:28.151539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-04 00:55:28.151542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-04 00:55:28.151546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-04 00:55:28.151550 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151554 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-04 00:55:28.151557 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-04 00:55:28.151561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-04 00:55:28.151565 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151569 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-04 00:55:28.151573 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-04 00:55:28.151576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-04 00:55:28.151580 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151583 | orchestrator |
2026-04-04 00:55:28.151587 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-04 00:55:28.151591 | orchestrator | Saturday 04 April 2026 00:53:57 +0000 (0:00:00.347) 0:00:21.628 ********
2026-04-04 00:55:28.151595 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 00:55:28.151599 | orchestrator |
2026-04-04 00:55:28.151603 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-04 00:55:28.151607 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.554) 0:00:22.183 ********
2026-04-04 00:55:28.151615 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151624 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151628 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151632 | orchestrator |
2026-04-04 00:55:28.151635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-04 00:55:28.151639 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.283) 0:00:22.466 ********
2026-04-04 00:55:28.151643 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151647 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151651 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151654 | orchestrator |
2026-04-04 00:55:28.151658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-04 00:55:28.151662 | orchestrator | Saturday 04 April 2026 00:53:58 +0000 (0:00:00.249) 0:00:22.716 ********
2026-04-04 00:55:28.151665 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151669 | orchestrator | skipping: [testbed-node-4]
2026-04-04 00:55:28.151673 | orchestrator | skipping: [testbed-node-5]
2026-04-04 00:55:28.151677 | orchestrator |
2026-04-04 00:55:28.151680 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-04 00:55:28.151684 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.284) 0:00:23.000 ********
2026-04-04 00:55:28.151688 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.151691 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.151695 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.151699 | orchestrator |
2026-04-04 00:55:28.151703 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-04 00:55:28.151709 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.445) 0:00:23.445 ********
2026-04-04 00:55:28.151713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:28.151716 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:28.151720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:28.151724 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151728 | orchestrator |
2026-04-04 00:55:28.151732 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-04 00:55:28.151736 | orchestrator | Saturday 04 April 2026 00:53:59 +0000 (0:00:00.344) 0:00:23.790 ********
2026-04-04 00:55:28.151740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:28.151743 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:28.151747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:28.151751 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151754 | orchestrator |
2026-04-04 00:55:28.151758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-04 00:55:28.151762 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:00.318) 0:00:24.108 ********
2026-04-04 00:55:28.151766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-04 00:55:28.151769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-04 00:55:28.151773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-04 00:55:28.151777 | orchestrator | skipping: [testbed-node-3]
2026-04-04 00:55:28.151782 | orchestrator |
2026-04-04 00:55:28.151788 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-04 00:55:28.151794 | orchestrator | Saturday 04 April 2026 00:54:00 +0000 (0:00:00.333) 0:00:24.441 ********
2026-04-04 00:55:28.151800 | orchestrator | ok: [testbed-node-3]
2026-04-04 00:55:28.151806 | orchestrator | ok: [testbed-node-4]
2026-04-04 00:55:28.151812 | orchestrator | ok: [testbed-node-5]
2026-04-04 00:55:28.151817 | orchestrator |
2026-04-04 00:55:28.151823 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-04 00:55:28.151863 | orchestrator | Saturday 04 April 2026 00:54:00 +0000
(0:00:00.250) 0:00:24.691 ******** 2026-04-04 00:55:28.151870 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-04 00:55:28.151878 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-04 00:55:28.151887 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-04 00:55:28.151891 | orchestrator | 2026-04-04 00:55:28.151894 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-04 00:55:28.151898 | orchestrator | Saturday 04 April 2026 00:54:01 +0000 (0:00:00.461) 0:00:25.153 ******** 2026-04-04 00:55:28.151902 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:55:28.151906 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:55:28.151910 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:55:28.151939 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:55:28.151944 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:55:28.151949 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:55:28.151953 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:55:28.151957 | orchestrator | 2026-04-04 00:55:28.151962 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-04 00:55:28.151967 | orchestrator | Saturday 04 April 2026 00:54:02 +0000 (0:00:00.846) 0:00:26.000 ******** 2026-04-04 00:55:28.151971 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-04 00:55:28.151976 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-04 00:55:28.151981 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-04 00:55:28.151985 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-04 00:55:28.151990 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-04 00:55:28.151994 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-04 00:55:28.152003 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-04 00:55:28.152008 | orchestrator | 2026-04-04 00:55:28.152013 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-04 00:55:28.152017 | orchestrator | Saturday 04 April 2026 00:54:03 +0000 (0:00:01.548) 0:00:27.548 ******** 2026-04-04 00:55:28.152022 | orchestrator | skipping: [testbed-node-3] 2026-04-04 00:55:28.152026 | orchestrator | skipping: [testbed-node-4] 2026-04-04 00:55:28.152031 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-04 00:55:28.152035 | orchestrator | 2026-04-04 00:55:28.152040 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-04 00:55:28.152044 | orchestrator | Saturday 04 April 2026 00:54:04 +0000 (0:00:00.301) 0:00:27.850 ******** 2026-04-04 00:55:28.152049 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:28.152058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-04-04 00:55:28.152063 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:28.152067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:28.152075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-04 00:55:28.152080 | orchestrator | 2026-04-04 00:55:28.152084 | orchestrator | TASK [generate keys] *********************************************************** 2026-04-04 00:55:28.152089 | orchestrator | Saturday 04 April 2026 00:54:41 +0000 (0:00:37.879) 0:01:05.729 ******** 2026-04-04 00:55:28.152093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152099 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152105 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152122 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 
00:55:28.152134 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-04 00:55:28.152140 | orchestrator | 2026-04-04 00:55:28.152147 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-04 00:55:28.152153 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:17.890) 0:01:23.619 ******** 2026-04-04 00:55:28.152159 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152166 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152172 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152178 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152189 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152196 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-04 00:55:28.152202 | orchestrator | 2026-04-04 00:55:28.152208 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-04 00:55:28.152215 | orchestrator | Saturday 04 April 2026 00:55:08 +0000 (0:00:08.459) 0:01:32.079 ******** 2026-04-04 00:55:28.152221 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152228 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152234 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152240 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152248 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152261 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152274 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152280 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152288 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152292 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152300 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152304 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152310 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-04 00:55:28.152331 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-04 00:55:28.152338 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-04 00:55:28.152344 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-04-04 00:55:28.152350 | orchestrator | 2026-04-04 00:55:28.152356 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:55:28.152362 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-04 00:55:28.152370 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-04 00:55:28.152377 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-04-04 00:55:28.152384 | orchestrator | 2026-04-04 00:55:28.152390 | orchestrator | 2026-04-04 00:55:28.152397 | orchestrator | 2026-04-04 00:55:28.152402 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:55:28.152405 | orchestrator | Saturday 04 April 2026 00:55:24 +0000 (0:00:16.563) 0:01:48.642 ******** 2026-04-04 00:55:28.152409 | orchestrator | =============================================================================== 2026-04-04 00:55:28.152413 | orchestrator | create openstack pool(s) ----------------------------------------------- 37.88s 2026-04-04 00:55:28.152417 | orchestrator | generate keys ---------------------------------------------------------- 17.89s 2026-04-04 00:55:28.152421 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.56s 2026-04-04 00:55:28.152424 | orchestrator | get keys from monitors -------------------------------------------------- 8.46s 2026-04-04 00:55:28.152428 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.11s 2026-04-04 00:55:28.152432 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.63s 2026-04-04 00:55:28.152436 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.55s 2026-04-04 00:55:28.152439 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.39s 2026-04-04 00:55:28.152443 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.93s 2026-04-04 00:55:28.152447 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.85s 2026-04-04 
00:55:28.152451 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2026-04-04 00:55:28.152454 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s 2026-04-04 00:55:28.152458 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.70s 2026-04-04 00:55:28.152462 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2026-04-04 00:55:28.152466 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.59s 2026-04-04 00:55:28.152469 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.55s 2026-04-04 00:55:28.152473 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.54s 2026-04-04 00:55:28.152477 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.53s 2026-04-04 00:55:28.152481 | orchestrator | ceph-facts : Set_fact discovered_interpreter_python if not previously set --- 0.51s 2026-04-04 00:55:28.152485 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.49s 2026-04-04 00:55:28.152489 | orchestrator | 2026-04-04 00:55:28 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:28.152496 | orchestrator | 2026-04-04 00:55:28 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:28.152500 | orchestrator | 2026-04-04 00:55:28 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:31.200047 | orchestrator | 2026-04-04 00:55:31 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:31.200616 | orchestrator | 2026-04-04 00:55:31 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:31.202053 | orchestrator | 2026-04-04 00:55:31 | INFO  | Task 
230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:31.202091 | orchestrator | 2026-04-04 00:55:31 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:34.242140 | orchestrator | 2026-04-04 00:55:34 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:34.244359 | orchestrator | 2026-04-04 00:55:34 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:34.245142 | orchestrator | 2026-04-04 00:55:34 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:34.245366 | orchestrator | 2026-04-04 00:55:34 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:37.285297 | orchestrator | 2026-04-04 00:55:37 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:37.287266 | orchestrator | 2026-04-04 00:55:37 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:37.289413 | orchestrator | 2026-04-04 00:55:37 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:37.289489 | orchestrator | 2026-04-04 00:55:37 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:40.355402 | orchestrator | 2026-04-04 00:55:40 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:40.355471 | orchestrator | 2026-04-04 00:55:40 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:40.355483 | orchestrator | 2026-04-04 00:55:40 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:40.355492 | orchestrator | 2026-04-04 00:55:40 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:43.348252 | orchestrator | 2026-04-04 00:55:43 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:43.351263 | orchestrator | 2026-04-04 00:55:43 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state 
STARTED 2026-04-04 00:55:43.353326 | orchestrator | 2026-04-04 00:55:43 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:43.353394 | orchestrator | 2026-04-04 00:55:43 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:46.410748 | orchestrator | 2026-04-04 00:55:46 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:46.412855 | orchestrator | 2026-04-04 00:55:46 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:46.414910 | orchestrator | 2026-04-04 00:55:46 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:46.414964 | orchestrator | 2026-04-04 00:55:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:49.465723 | orchestrator | 2026-04-04 00:55:49 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:49.468111 | orchestrator | 2026-04-04 00:55:49 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:49.470182 | orchestrator | 2026-04-04 00:55:49 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:49.470229 | orchestrator | 2026-04-04 00:55:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:52.518608 | orchestrator | 2026-04-04 00:55:52 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:52.520495 | orchestrator | 2026-04-04 00:55:52 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:52.522686 | orchestrator | 2026-04-04 00:55:52 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:52.522741 | orchestrator | 2026-04-04 00:55:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:55.560568 | orchestrator | 2026-04-04 00:55:55 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:55.564095 | orchestrator | 
2026-04-04 00:55:55 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:55.565117 | orchestrator | 2026-04-04 00:55:55 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:55.565148 | orchestrator | 2026-04-04 00:55:55 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:55:58.613773 | orchestrator | 2026-04-04 00:55:58 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:55:58.616056 | orchestrator | 2026-04-04 00:55:58 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:55:58.617684 | orchestrator | 2026-04-04 00:55:58 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:55:58.617723 | orchestrator | 2026-04-04 00:55:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:01.674134 | orchestrator | 2026-04-04 00:56:01 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:01.675839 | orchestrator | 2026-04-04 00:56:01 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state STARTED 2026-04-04 00:56:01.677462 | orchestrator | 2026-04-04 00:56:01 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:01.677846 | orchestrator | 2026-04-04 00:56:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:04.729344 | orchestrator | 2026-04-04 00:56:04 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:04.730413 | orchestrator | 2026-04-04 00:56:04 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:04.731797 | orchestrator | 2026-04-04 00:56:04 | INFO  | Task a84fa0e2-d347-4fca-8760-0cdf704b70e6 is in state SUCCESS 2026-04-04 00:56:04.733259 | orchestrator | 2026-04-04 00:56:04 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:04.733613 | orchestrator | 2026-04-04 00:56:04 | INFO  | 
Wait 1 second(s) until the next check 2026-04-04 00:56:07.776267 | orchestrator | 2026-04-04 00:56:07 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:07.778328 | orchestrator | 2026-04-04 00:56:07 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:07.780548 | orchestrator | 2026-04-04 00:56:07 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:07.781045 | orchestrator | 2026-04-04 00:56:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:10.825624 | orchestrator | 2026-04-04 00:56:10 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:10.826622 | orchestrator | 2026-04-04 00:56:10 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:10.828939 | orchestrator | 2026-04-04 00:56:10 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:10.829558 | orchestrator | 2026-04-04 00:56:10 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:13.876460 | orchestrator | 2026-04-04 00:56:13 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:13.879126 | orchestrator | 2026-04-04 00:56:13 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:13.882327 | orchestrator | 2026-04-04 00:56:13 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:13.882396 | orchestrator | 2026-04-04 00:56:13 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:16.932294 | orchestrator | 2026-04-04 00:56:16 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:16.934636 | orchestrator | 2026-04-04 00:56:16 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:16.936077 | orchestrator | 2026-04-04 00:56:16 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state 
STARTED 2026-04-04 00:56:16.936139 | orchestrator | 2026-04-04 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:19.981215 | orchestrator | 2026-04-04 00:56:19 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:19.981304 | orchestrator | 2026-04-04 00:56:19 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:19.982308 | orchestrator | 2026-04-04 00:56:19 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:19.982343 | orchestrator | 2026-04-04 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:23.022642 | orchestrator | 2026-04-04 00:56:23 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state STARTED 2026-04-04 00:56:23.025536 | orchestrator | 2026-04-04 00:56:23 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED 2026-04-04 00:56:23.028411 | orchestrator | 2026-04-04 00:56:23 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED 2026-04-04 00:56:23.030529 | orchestrator | 2026-04-04 00:56:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:56:26.062626 | orchestrator | 2026-04-04 00:56:26 | INFO  | Task e95e73e0-9cad-4590-b7cd-a4269d6d1056 is in state SUCCESS 2026-04-04 00:56:26.063620 | orchestrator | 2026-04-04 00:56:26.063682 | orchestrator | 2026-04-04 00:56:26.063691 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-04-04 00:56:26.063699 | orchestrator | 2026-04-04 00:56:26.063706 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-04-04 00:56:26.063714 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:00.230) 0:00:00.230 ******** 2026-04-04 00:56:26.063721 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-04 00:56:26.063729 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.063822 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.063829 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 00:56:26.063836 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.063843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-04 00:56:26.063865 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-04 00:56:26.063991 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-04 00:56:26.063999 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-04 00:56:26.064005 | orchestrator | 2026-04-04 00:56:26.064012 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-04-04 00:56:26.064018 | orchestrator | Saturday 04 April 2026 00:55:32 +0000 (0:00:04.471) 0:00:04.701 ******** 2026-04-04 00:56:26.064024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-04-04 00:56:26.064030 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064070 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 00:56:26.064082 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 
2026-04-04 00:56:26.064088 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-04-04 00:56:26.064094 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-04-04 00:56:26.064100 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-04-04 00:56:26.064106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-04-04 00:56:26.064271 | orchestrator | 2026-04-04 00:56:26.064281 | orchestrator | TASK [Create share directory] ************************************************** 2026-04-04 00:56:26.064287 | orchestrator | Saturday 04 April 2026 00:55:37 +0000 (0:00:04.572) 0:00:09.273 ******** 2026-04-04 00:56:26.064294 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-04 00:56:26.064300 | orchestrator | 2026-04-04 00:56:26.064306 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-04-04 00:56:26.064312 | orchestrator | Saturday 04 April 2026 00:55:38 +0000 (0:00:00.917) 0:00:10.191 ******** 2026-04-04 00:56:26.064319 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-04-04 00:56:26.064326 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064332 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064339 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 00:56:26.064346 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064352 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-04-04 00:56:26.064358 | orchestrator | changed: [testbed-manager -> 
localhost] => (item=ceph.client.glance.keyring) 2026-04-04 00:56:26.064364 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-04-04 00:56:26.064371 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-04-04 00:56:26.064377 | orchestrator | 2026-04-04 00:56:26.064383 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-04-04 00:56:26.064391 | orchestrator | Saturday 04 April 2026 00:55:52 +0000 (0:00:13.563) 0:00:23.754 ******** 2026-04-04 00:56:26.064396 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-04-04 00:56:26.064403 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-04-04 00:56:26.064410 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-04 00:56:26.064417 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-04-04 00:56:26.064447 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-04 00:56:26.064453 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-04-04 00:56:26.064460 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-04-04 00:56:26.064467 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-04-04 00:56:26.064473 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-04-04 00:56:26.064479 | orchestrator | 2026-04-04 00:56:26.064485 | orchestrator | TASK [Write ceph keys to the configuration directory] 
************************** 2026-04-04 00:56:26.064492 | orchestrator | Saturday 04 April 2026 00:55:55 +0000 (0:00:03.407) 0:00:27.162 ******** 2026-04-04 00:56:26.064500 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-04-04 00:56:26.064506 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-04-04 00:56:26.064520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 00:56:26.064541 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-04-04 00:56:26.064548 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-04-04 00:56:26.064554 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-04-04 00:56:26.064560 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-04-04 00:56:26.064567 | orchestrator | 2026-04-04 00:56:26.064573 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:56:26.064579 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:56:26.064586 | orchestrator | 2026-04-04 00:56:26.064593 | orchestrator | 2026-04-04 00:56:26.064599 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:56:26.064605 | orchestrator | Saturday 04 April 2026 00:56:02 +0000 (0:00:06.813) 0:00:33.975 ******** 2026-04-04 00:56:26.064613 | orchestrator | =============================================================================== 2026-04-04 00:56:26.064619 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.56s 
2026-04-04 00:56:26.064625 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.81s 2026-04-04 00:56:26.064631 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.57s 2026-04-04 00:56:26.064637 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.47s 2026-04-04 00:56:26.064643 | orchestrator | Check if target directories exist --------------------------------------- 3.41s 2026-04-04 00:56:26.064649 | orchestrator | Create share directory -------------------------------------------------- 0.92s 2026-04-04 00:56:26.064656 | orchestrator | 2026-04-04 00:56:26.064663 | orchestrator | 2026-04-04 00:56:26.064669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:56:26.064675 | orchestrator | 2026-04-04 00:56:26.064681 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:56:26.064688 | orchestrator | Saturday 04 April 2026 00:54:53 +0000 (0:00:00.274) 0:00:00.274 ******** 2026-04-04 00:56:26.064694 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.064701 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.064707 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.064714 | orchestrator | 2026-04-04 00:56:26.064720 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:56:26.064727 | orchestrator | Saturday 04 April 2026 00:54:53 +0000 (0:00:00.240) 0:00:00.514 ******** 2026-04-04 00:56:26.064771 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-04-04 00:56:26.064782 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-04-04 00:56:26.064789 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-04-04 00:56:26.064796 | orchestrator | 2026-04-04 00:56:26.064802 | orchestrator | PLAY [Apply role horizon] 
****************************************************** 2026-04-04 00:56:26.064809 | orchestrator | 2026-04-04 00:56:26.064816 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-04 00:56:26.064823 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:00.264) 0:00:00.779 ******** 2026-04-04 00:56:26.064830 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:56:26.064837 | orchestrator | 2026-04-04 00:56:26.064843 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-04-04 00:56:26.064851 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:00.500) 0:00:01.280 ******** 2026-04-04 00:56:26.064928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:56:26.064944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:56:26.064973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-04 00:56:26.064982 | orchestrator | 2026-04-04 00:56:26.064990 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-04 00:56:26.064997 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:01.354) 0:00:02.634 ******** 2026-04-04 00:56:26.065004 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065011 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065019 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065026 | orchestrator | 2026-04-04 00:56:26.065034 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2026-04-04 00:56:26.065049 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:00.253) 0:00:02.888 ******** 2026-04-04 00:56:26.065055 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:56:26.065061 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-04 00:56:26.065068 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-04 00:56:26.065075 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:56:26.065081 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:56:26.065088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:56:26.065095 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:56:26.065101 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:56:26.065107 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:56:26.065113 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-04 00:56:26.065120 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-04 00:56:26.065127 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:56:26.065133 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:56:26.065141 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:56:26.065147 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:56:26.065154 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:56:26.065161 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-04 00:56:26.065167 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-04 00:56:26.065174 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-04 00:56:26.065180 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-04 00:56:26.065187 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-04 00:56:26.065194 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-04 00:56:26.065206 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-04 00:56:26.065212 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-04 00:56:26.065220 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-04 00:56:26.065230 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-04 00:56:26.065237 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-04 00:56:26.065244 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-04 00:56:26.065256 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'keystone', 'enabled': True}) 2026-04-04 00:56:26.065263 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-04 00:56:26.065274 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-04 00:56:26.065281 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-04 00:56:26.065287 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-04 00:56:26.065294 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-04 00:56:26.065301 | orchestrator | 2026-04-04 00:56:26.065308 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065315 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:00.645) 0:00:03.533 ******** 2026-04-04 00:56:26.065322 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065329 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065334 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065340 | orchestrator | 2026-04-04 00:56:26.065346 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065352 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:00.381) 0:00:03.915 ******** 2026-04-04 00:56:26.065358 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065364 | orchestrator | 2026-04-04 00:56:26.065370 | orchestrator | TASK [horizon : Update custom policy file name] 
******************************** 2026-04-04 00:56:26.065376 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:00.130) 0:00:04.045 ******** 2026-04-04 00:56:26.065383 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065389 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065396 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065402 | orchestrator | 2026-04-04 00:56:26.065408 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065415 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:00.316) 0:00:04.362 ******** 2026-04-04 00:56:26.065421 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065428 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065434 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065441 | orchestrator | 2026-04-04 00:56:26.065447 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065453 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.323) 0:00:04.685 ******** 2026-04-04 00:56:26.065459 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065465 | orchestrator | 2026-04-04 00:56:26.065472 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.065478 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.117) 0:00:04.802 ******** 2026-04-04 00:56:26.065484 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065490 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065497 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065503 | orchestrator | 2026-04-04 00:56:26.065509 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065516 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.543) 0:00:05.346 ******** 2026-04-04 
00:56:26.065523 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065530 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065536 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065542 | orchestrator | 2026-04-04 00:56:26.065548 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065555 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:00.352) 0:00:05.699 ******** 2026-04-04 00:56:26.065561 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065568 | orchestrator | 2026-04-04 00:56:26.065574 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.065586 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:00.126) 0:00:05.826 ******** 2026-04-04 00:56:26.065593 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065599 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065606 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065612 | orchestrator | 2026-04-04 00:56:26.065618 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065629 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:00.277) 0:00:06.104 ******** 2026-04-04 00:56:26.065636 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065643 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065650 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065656 | orchestrator | 2026-04-04 00:56:26.065663 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065670 | orchestrator | Saturday 04 April 2026 00:54:59 +0000 (0:00:00.300) 0:00:06.404 ******** 2026-04-04 00:56:26.065677 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065683 | orchestrator | 2026-04-04 00:56:26.065689 | orchestrator | TASK [horizon : Update custom 
policy file name] ******************************** 2026-04-04 00:56:26.065696 | orchestrator | Saturday 04 April 2026 00:55:00 +0000 (0:00:00.136) 0:00:06.540 ******** 2026-04-04 00:56:26.065702 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065709 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065714 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065721 | orchestrator | 2026-04-04 00:56:26.065728 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065750 | orchestrator | Saturday 04 April 2026 00:55:00 +0000 (0:00:00.462) 0:00:07.003 ******** 2026-04-04 00:56:26.065757 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065763 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065769 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065776 | orchestrator | 2026-04-04 00:56:26.065786 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065792 | orchestrator | Saturday 04 April 2026 00:55:00 +0000 (0:00:00.300) 0:00:07.304 ******** 2026-04-04 00:56:26.065799 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065805 | orchestrator | 2026-04-04 00:56:26.065812 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.065818 | orchestrator | Saturday 04 April 2026 00:55:00 +0000 (0:00:00.120) 0:00:07.424 ******** 2026-04-04 00:56:26.065824 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065830 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065837 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065842 | orchestrator | 2026-04-04 00:56:26.065848 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065855 | orchestrator | Saturday 04 April 2026 00:55:01 +0000 (0:00:00.273) 0:00:07.697 
******** 2026-04-04 00:56:26.065861 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065867 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065873 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065880 | orchestrator | 2026-04-04 00:56:26.065886 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.065892 | orchestrator | Saturday 04 April 2026 00:55:01 +0000 (0:00:00.515) 0:00:08.213 ******** 2026-04-04 00:56:26.065898 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065904 | orchestrator | 2026-04-04 00:56:26.065911 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.065917 | orchestrator | Saturday 04 April 2026 00:55:01 +0000 (0:00:00.153) 0:00:08.366 ******** 2026-04-04 00:56:26.065923 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.065930 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.065936 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.065942 | orchestrator | 2026-04-04 00:56:26.065949 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.065955 | orchestrator | Saturday 04 April 2026 00:55:02 +0000 (0:00:00.306) 0:00:08.672 ******** 2026-04-04 00:56:26.065967 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.065974 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.065980 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.065987 | orchestrator | 2026-04-04 00:56:26.065994 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.066000 | orchestrator | Saturday 04 April 2026 00:55:02 +0000 (0:00:00.297) 0:00:08.969 ******** 2026-04-04 00:56:26.066006 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066076 | orchestrator | 2026-04-04 00:56:26.066087 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-04-04 00:56:26.066095 | orchestrator | Saturday 04 April 2026 00:55:02 +0000 (0:00:00.137) 0:00:09.107 ******** 2026-04-04 00:56:26.066102 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066108 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.066115 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.066122 | orchestrator | 2026-04-04 00:56:26.066129 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.066136 | orchestrator | Saturday 04 April 2026 00:55:02 +0000 (0:00:00.270) 0:00:09.377 ******** 2026-04-04 00:56:26.066143 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.066149 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.066156 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.066162 | orchestrator | 2026-04-04 00:56:26.066169 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.066176 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.464) 0:00:09.841 ******** 2026-04-04 00:56:26.066183 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066190 | orchestrator | 2026-04-04 00:56:26.066197 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.066204 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.115) 0:00:09.957 ******** 2026-04-04 00:56:26.066210 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066216 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.066223 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.066230 | orchestrator | 2026-04-04 00:56:26.066236 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.066386 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.266) 
0:00:10.224 ******** 2026-04-04 00:56:26.066393 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.066400 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.066406 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.066412 | orchestrator | 2026-04-04 00:56:26.066418 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.066425 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.333) 0:00:10.557 ******** 2026-04-04 00:56:26.066432 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066437 | orchestrator | 2026-04-04 00:56:26.066452 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-04 00:56:26.066460 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.111) 0:00:10.669 ******** 2026-04-04 00:56:26.066466 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066472 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:56:26.066478 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:56:26.066484 | orchestrator | 2026-04-04 00:56:26.066490 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-04 00:56:26.066497 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.310) 0:00:10.979 ******** 2026-04-04 00:56:26.066503 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:56:26.066509 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:56:26.066516 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:56:26.066522 | orchestrator | 2026-04-04 00:56:26.066527 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-04 00:56:26.066534 | orchestrator | Saturday 04 April 2026 00:55:04 +0000 (0:00:00.495) 0:00:11.475 ******** 2026-04-04 00:56:26.066556 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:56:26.066562 | orchestrator | 2026-04-04 00:56:26.066569 | orchestrator | 
TASK [horizon : Update custom policy file name] ********************************
2026-04-04 00:56:26.066575 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.140) 0:00:11.615 ********
2026-04-04 00:56:26.066582 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.066588 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.066600 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.066606 | orchestrator |
2026-04-04 00:56:26.066613 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-04 00:56:26.066619 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:00.282) 0:00:11.898 ********
2026-04-04 00:56:26.066625 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:56:26.066631 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:56:26.066637 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:56:26.066643 | orchestrator |
2026-04-04 00:56:26.066650 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-04 00:56:26.066656 | orchestrator | Saturday 04 April 2026 00:55:06 +0000 (0:00:01.487) 0:00:13.385 ********
2026-04-04 00:56:26.066661 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:56:26.066668 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:56:26.066674 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-04 00:56:26.066680 | orchestrator |
2026-04-04 00:56:26.066686 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-04 00:56:26.066692 | orchestrator | Saturday 04 April 2026 00:55:08 +0000 (0:00:02.050) 0:00:15.436 ********
2026-04-04 00:56:26.066698 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:56:26.066706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:56:26.066711 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-04 00:56:26.066718 | orchestrator |
2026-04-04 00:56:26.066724 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-04-04 00:56:26.066730 | orchestrator | Saturday 04 April 2026 00:55:11 +0000 (0:00:02.320) 0:00:17.756 ********
2026-04-04 00:56:26.066779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:56:26.066786 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:56:26.066792 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-04-04 00:56:26.066798 | orchestrator |
2026-04-04 00:56:26.066804 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-04-04 00:56:26.066810 | orchestrator | Saturday 04 April 2026 00:55:12 +0000 (0:00:01.516) 0:00:19.272 ********
2026-04-04 00:56:26.066816 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.066823 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.066829 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.066836 | orchestrator |
2026-04-04 00:56:26.066842 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-04-04 00:56:26.066848 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.298) 0:00:19.570 ********
2026-04-04 00:56:26.066855 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.066861 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.066868 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.066874 | orchestrator |
2026-04-04 00:56:26.066880 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-04 00:56:26.066887 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.279) 0:00:19.850 ********
2026-04-04 00:56:26.066893 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:56:26.066907 | orchestrator |
2026-04-04 00:56:26.066914 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-04-04 00:56:26.066920 | orchestrator | Saturday 04 April 2026 00:55:14 +0000 (0:00:00.786) 0:00:20.636 ********
2026-04-04 00:56:26.066945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.066957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.066985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.066992 | orchestrator |
2026-04-04 00:56:26.066999 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-04-04 00:56:26.067005 | orchestrator | Saturday 04 April 2026 00:55:15 +0000 (0:00:01.615) 0:00:22.252 ********
2026-04-04 00:56:26.067019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no',
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067033 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.067047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067055 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.067070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067110 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.067119 | orchestrator |
2026-04-04 00:56:26.067126 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-04-04 00:56:26.067133 | orchestrator | Saturday 04 April 2026 00:55:16 +0000 (0:00:00.863) 0:00:23.115 ********
2026-04-04 00:56:26.067146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067153 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.067174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067181 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.067192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067204 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.067212 | orchestrator |
2026-04-04 00:56:26.067218 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-04-04 00:56:26.067225 | orchestrator | Saturday 04 April 2026 00:55:17 +0000 (0:00:01.098) 0:00:24.214 ********
2026-04-04 00:56:26.067241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value':
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-04 00:56:26.067284 | orchestrator |
2026-04-04 00:56:26.067291 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-04 00:56:26.067298 | orchestrator | Saturday 04 April 2026 00:55:19 +0000 (0:00:01.497) 0:00:25.711 ********
2026-04-04 00:56:26.067304 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:56:26.067311 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:56:26.067318 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:56:26.067325 | orchestrator |
2026-04-04 00:56:26.067331 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-04 00:56:26.067338 | orchestrator | Saturday 04 April 2026 00:55:19 +0000 (0:00:00.305) 0:00:26.017 ********
2026-04-04 00:56:26.067345 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:56:26.067351 | orchestrator |
2026-04-04 00:56:26.067358 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-04-04 00:56:26.067371 | orchestrator | Saturday 04 April 2026 00:55:20 +0000 (0:00:00.735) 0:00:26.752 ********
2026-04-04 00:56:26.067378 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:56:26.067385 | orchestrator |
2026-04-04 00:56:26.067391 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-04-04 00:56:26.067398 | orchestrator | Saturday 04 April 2026 00:55:22 +0000 (0:00:02.142) 0:00:28.895 ********
2026-04-04 00:56:26.067404 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:56:26.067411 | orchestrator |
2026-04-04 00:56:26.067417 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-04-04 00:56:26.067423 | orchestrator | Saturday 04 April 2026 00:55:24 +0000 (0:00:02.085) 0:00:30.981 ********
2026-04-04 00:56:26.067430 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:56:26.067437 | orchestrator |
2026-04-04 00:56:26.067443 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-04 00:56:26.067450 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:15.463) 0:00:46.444 ********
2026-04-04 00:56:26.067456 | orchestrator |
2026-04-04 00:56:26.067463 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-04 00:56:26.067469 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:00.059) 0:00:46.504 ********
2026-04-04 00:56:26.067476 | orchestrator |
2026-04-04 00:56:26.067482 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-04-04 00:56:26.067489 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.059) 0:00:46.563 ********
2026-04-04 00:56:26.067496 | orchestrator |
2026-04-04 00:56:26.067503 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-04-04 00:56:26.067510 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.060) 0:00:46.624 ********
2026-04-04 00:56:26.067516 | orchestrator | changed: [testbed-node-0]
2026-04-04 00:56:26.067523 | orchestrator | changed: [testbed-node-2]
2026-04-04 00:56:26.067530 | orchestrator | changed: [testbed-node-1]
2026-04-04 00:56:26.067537 | orchestrator |
2026-04-04 00:56:26.067543 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:56:26.067549 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-04 00:56:26.067558 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-04 00:56:26.067564 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-04 00:56:26.067571 | orchestrator |
2026-04-04 00:56:26.067578 | orchestrator |
2026-04-04 00:56:26.067589 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:56:26.067596 | orchestrator | Saturday 04 April 2026 00:56:24 +0000 (0:00:44.639) 0:01:31.263 ********
2026-04-04 00:56:26.067602 | orchestrator | ===============================================================================
2026-04-04 00:56:26.067608 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.64s
2026-04-04 00:56:26.067615 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.46s
2026-04-04 00:56:26.067621 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.32s
2026-04-04 00:56:26.067627 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.14s
2026-04-04 00:56:26.067634 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.09s
2026-04-04 00:56:26.067641 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.05s
2026-04-04 00:56:26.067647 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.62s
2026-04-04 00:56:26.067653 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.52s
2026-04-04 00:56:26.067660 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.50s
2026-04-04 00:56:26.067675 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.49s
2026-04-04 00:56:26.067683 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.35s
2026-04-04 00:56:26.067690 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.10s
2026-04-04 00:56:26.067696 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.86s
2026-04-04 00:56:26.067702 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2026-04-04 00:56:26.067709 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.73s
2026-04-04 00:56:26.067715 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-04-04 00:56:26.067722 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s
2026-04-04 00:56:26.067729 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s
2026-04-04 00:56:26.067778 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s
2026-04-04 00:56:26.067786 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2026-04-04 00:56:26.067792 | orchestrator | 2026-04-04 00:56:26 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:26.067798 | orchestrator | 2026-04-04 00:56:26 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:26.067805 | orchestrator | 2026-04-04 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:29.105948 | orchestrator | 2026-04-04 00:56:29 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:29.107529 | orchestrator | 2026-04-04 00:56:29 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
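The alternating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are produced by a client polling task states until none remain STARTED. A minimal sketch of that pattern, with a hypothetical `get_state` callable standing in for the real manager API query:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll task states until no task is left in STARTED.

    `get_state` is a hypothetical callable mapping a task ID to its
    current state string ("STARTED", "SUCCESS", ...); the real client
    behind this log queries the OSISM manager instead.
    """
    pending = set(task_ids)
    states = {}
    for _ in range(max_checks):
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Only tasks still STARTED need another check.
        pending = {t for t, s in states.items() if s == "STARTED"}
        if not pending:
            return states
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"tasks still pending: {sorted(pending)}")
```

Note that new task IDs appear in the log as tasks are enqueued while earlier ones finish; the sketch above only tracks a fixed set.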
2026-04-04 00:56:29.107582 | orchestrator | 2026-04-04 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:32.155132 | orchestrator | 2026-04-04 00:56:32 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:32.157396 | orchestrator | 2026-04-04 00:56:32 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:32.157460 | orchestrator | 2026-04-04 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:35.199712 | orchestrator | 2026-04-04 00:56:35 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:35.201437 | orchestrator | 2026-04-04 00:56:35 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:35.201473 | orchestrator | 2026-04-04 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:38.243598 | orchestrator | 2026-04-04 00:56:38 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:38.245071 | orchestrator | 2026-04-04 00:56:38 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:38.245107 | orchestrator | 2026-04-04 00:56:38 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:41.291097 | orchestrator | 2026-04-04 00:56:41 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:41.292038 | orchestrator | 2026-04-04 00:56:41 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:41.292071 | orchestrator | 2026-04-04 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:44.329070 | orchestrator | 2026-04-04 00:56:44 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:44.330526 | orchestrator | 2026-04-04 00:56:44 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:44.330601 | orchestrator | 2026-04-04 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:47.381843 | orchestrator | 2026-04-04 00:56:47 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:47.384321 | orchestrator | 2026-04-04 00:56:47 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:47.384381 | orchestrator | 2026-04-04 00:56:47 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:50.427617 | orchestrator | 2026-04-04 00:56:50 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:50.429583 | orchestrator | 2026-04-04 00:56:50 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:50.429644 | orchestrator | 2026-04-04 00:56:50 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:53.472563 | orchestrator | 2026-04-04 00:56:53 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:53.475133 | orchestrator | 2026-04-04 00:56:53 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:53.475197 | orchestrator | 2026-04-04 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:56.515283 | orchestrator | 2026-04-04 00:56:56 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state STARTED
2026-04-04 00:56:56.516846 | orchestrator | 2026-04-04 00:56:56 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:56.516929 | orchestrator | 2026-04-04 00:56:56 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:56:59.555820 | orchestrator | 2026-04-04 00:56:59 | INFO  | Task cb9710ef-98c0-43a7-9fb2-d71bdae68282 is in state SUCCESS
2026-04-04 00:56:59.556237 | orchestrator | 2026-04-04 00:56:59 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:56:59.557944 | orchestrator | 2026-04-04 00:56:59 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:56:59.558057 | orchestrator | 2026-04-04 00:56:59 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:02.610479 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:02.610956 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:02.611533 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task 342a22da-2b92-4b68-b2df-923a0cb50253 is in state STARTED
2026-04-04 00:57:02.614816 | orchestrator | 2026-04-04 00:57:02 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:02.614872 | orchestrator | 2026-04-04 00:57:02 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:05.645754 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:05.648823 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:05.652504 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:05.653639 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:05.654308 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task 342a22da-2b92-4b68-b2df-923a0cb50253 is in state SUCCESS
2026-04-04 00:57:05.655173 | orchestrator | 2026-04-04 00:57:05 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:05.657694 | orchestrator | 2026-04-04 00:57:05 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:08.685689 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:08.685872 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:08.688414 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:08.690575 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:08.692615 | orchestrator | 2026-04-04 00:57:08 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:08.692701 | orchestrator | 2026-04-04 00:57:08 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:11.720520 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:11.720729 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:11.721571 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:11.722134 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:11.723127 | orchestrator | 2026-04-04 00:57:11 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:11.723176 | orchestrator | 2026-04-04 00:57:11 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:14.763701 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:14.765871 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:14.767356 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:14.768714 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:14.770443 | orchestrator | 2026-04-04 00:57:14 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:14.770485 | orchestrator | 2026-04-04 00:57:14 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:17.815120 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:17.815390 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:17.816177 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:17.816999 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:17.817690 | orchestrator | 2026-04-04 00:57:17 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:17.817717 | orchestrator | 2026-04-04 00:57:17 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:20.860434 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:20.862872 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:20.865635 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:20.871343 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:20.872699 | orchestrator | 2026-04-04 00:57:20 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:20.872759 | orchestrator | 2026-04-04 00:57:20 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:23.910098 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:23.911154 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:23.914325 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:23.917081 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:23.919530 | orchestrator | 2026-04-04 00:57:23 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:23.919572 | orchestrator | 2026-04-04 00:57:23 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:26.963528 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:26.963618 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:26.964382 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:26.965694 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:26.966561 | orchestrator | 2026-04-04 00:57:26 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:26.966605 | orchestrator | 2026-04-04 00:57:26 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:30.021882 | orchestrator | 2026-04-04 00:57:30 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:30.022586 | orchestrator | 2026-04-04 00:57:30 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:30.025048 | orchestrator | 2026-04-04 00:57:30 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:30.026074 | orchestrator | 2026-04-04 00:57:30 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:30.026915 | orchestrator | 2026-04-04 00:57:30 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state STARTED
2026-04-04 00:57:30.026955 | orchestrator | 2026-04-04 00:57:30 | INFO  | Wait 1 second(s) until the next check
2026-04-04 00:57:33.092149 | orchestrator | 2026-04-04 00:57:33 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED
2026-04-04 00:57:33.092403 | orchestrator | 2026-04-04 00:57:33 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED
2026-04-04 00:57:33.093558 | orchestrator | 2026-04-04 00:57:33 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:33.095158 | orchestrator | 2026-04-04 00:57:33 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:33.096940 | orchestrator | 2026-04-04 00:57:33 | INFO  | Task 230ceebf-1188-4101-9f36-67e7524cc4ef is in state SUCCESS
2026-04-04 00:57:33.099754 | orchestrator |
2026-04-04 00:57:33.099815 | orchestrator |
2026-04-04 00:57:33.099823 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-04-04 00:57:33.099828 | orchestrator |
2026-04-04 00:57:33.099832 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-04-04 00:57:33.099837 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:00.275) 0:00:00.275 ********
2026-04-04 00:57:33.099841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-04-04 00:57:33.099867 | orchestrator |
2026-04-04 00:57:33.099871 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-04-04 00:57:33.099875 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:00.227) 0:00:00.502 ********
2026-04-04 00:57:33.099880 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-04-04 00:57:33.099885 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-04-04 00:57:33.099889 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-04-04 00:57:33.099893 | orchestrator |
2026-04-04 00:57:33.099897 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-04-04 00:57:33.099901 | orchestrator | Saturday 04 April 2026 00:56:07 +0000 (0:00:01.375) 0:00:01.877 ********
2026-04-04 00:57:33.099905 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-04-04 00:57:33.099909 | orchestrator |
2026-04-04 00:57:33.099913 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-04-04 00:57:33.099917 | orchestrator | Saturday 04 April 2026 00:56:08 +0000 (0:00:01.196) 0:00:03.074 ********
2026-04-04 00:57:33.099921 | orchestrator | changed: [testbed-manager]
2026-04-04 00:57:33.099925 | orchestrator |
2026-04-04 00:57:33.099929 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-04-04 00:57:33.099933 | orchestrator | Saturday 04 April 2026 00:56:09 +0000 (0:00:00.883) 0:00:03.958 ********
2026-04-04 00:57:33.099936 | orchestrator | changed: [testbed-manager]
2026-04-04 00:57:33.099940 | orchestrator |
2026-04-04 00:57:33.099944 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-04-04 00:57:33.099948 | orchestrator | Saturday 04 April 2026 00:56:10 +0000 (0:00:00.896) 0:00:04.854 ********
2026-04-04 00:57:33.099952 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
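The "FAILED - RETRYING: … (10 retries left)" line above is Ansible's task retry loop (`retries`/`delay`/`until`): the task re-runs until its condition passes or the retry budget is exhausted. A hedged Python equivalent of those semantics (illustrative only, not the role's actual implementation):

```python
import time

def retry_until(check, retries=10, delay=0.0):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    retries/delay/until task semantics seen in the log above.

    Returns the number of attempts used; raises if every attempt fails.
    """
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        remaining = retries - attempt
        # Ansible prints a similar line for each failed attempt.
        print(f"FAILED - RETRYING: ({remaining} retries left).")
        time.sleep(delay)
    raise RuntimeError("condition never became true")
```

In the log the task succeeds on the second attempt, which is why the 38.81s duration for "Manage cephclient service" dominates the recap.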
2026-04-04 00:57:33.099956 | orchestrator | ok: [testbed-manager]
2026-04-04 00:57:33.099960 | orchestrator |
2026-04-04 00:57:33.099963 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-04-04 00:57:33.099967 | orchestrator | Saturday 04 April 2026 00:56:49 +0000 (0:00:38.813) 0:00:43.668 ********
2026-04-04 00:57:33.099971 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-04-04 00:57:33.099976 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-04-04 00:57:33.099979 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-04-04 00:57:33.099983 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-04-04 00:57:33.099987 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-04-04 00:57:33.099991 | orchestrator |
2026-04-04 00:57:33.099994 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-04-04 00:57:33.099998 | orchestrator | Saturday 04 April 2026 00:56:53 +0000 (0:00:03.999) 0:00:47.667 ********
2026-04-04 00:57:33.100002 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-04-04 00:57:33.100006 | orchestrator |
2026-04-04 00:57:33.100010 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-04-04 00:57:33.100014 | orchestrator | Saturday 04 April 2026 00:56:53 +0000 (0:00:00.587) 0:00:48.255 ********
2026-04-04 00:57:33.100018 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:57:33.100022 | orchestrator |
2026-04-04 00:57:33.100025 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-04 00:57:33.100029 | orchestrator | Saturday 04 April 2026 00:56:53 +0000 (0:00:00.124) 0:00:48.379 ********
2026-04-04 00:57:33.100033 | orchestrator | skipping: [testbed-manager]
2026-04-04 00:57:33.100037 | orchestrator |
2026-04-04 00:57:33.100040 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-04-04 00:57:33.100044 | orchestrator | Saturday 04 April 2026 00:56:54 +0000 (0:00:00.310) 0:00:48.690 ********
2026-04-04 00:57:33.100048 | orchestrator | changed: [testbed-manager]
2026-04-04 00:57:33.100089 | orchestrator |
2026-04-04 00:57:33.100094 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-04-04 00:57:33.100098 | orchestrator | Saturday 04 April 2026 00:56:55 +0000 (0:00:01.343) 0:00:50.033 ********
2026-04-04 00:57:33.100102 | orchestrator | changed: [testbed-manager]
2026-04-04 00:57:33.100105 | orchestrator |
2026-04-04 00:57:33.100109 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-04-04 00:57:33.100113 | orchestrator | Saturday 04 April 2026 00:56:56 +0000 (0:00:00.687) 0:00:50.721 ********
2026-04-04 00:57:33.100117 | orchestrator | changed: [testbed-manager]
2026-04-04 00:57:33.100121 | orchestrator |
2026-04-04 00:57:33.100124 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-04-04 00:57:33.100128 | orchestrator | Saturday 04 April 2026 00:56:56 +0000 (0:00:00.616) 0:00:51.338 ********
2026-04-04 00:57:33.100132 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-04-04 00:57:33.100136 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-04-04 00:57:33.100140 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-04-04 00:57:33.100144 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-04-04 00:57:33.100147 | orchestrator |
2026-04-04 00:57:33.100163 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:57:33.100167 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-04 00:57:33.100172 | orchestrator |
2026-04-04 00:57:33.100176 | orchestrator |
2026-04-04 00:57:33.100471 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:57:33.100486 | orchestrator | Saturday 04 April 2026 00:56:58 +0000 (0:00:01.437) 0:00:52.776 ********
2026-04-04 00:57:33.100492 | orchestrator | ===============================================================================
2026-04-04 00:57:33.100496 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.81s
2026-04-04 00:57:33.100501 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.00s
2026-04-04 00:57:33.100505 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.44s
2026-04-04 00:57:33.100510 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.38s
2026-04-04 00:57:33.100514 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.34s
2026-04-04 00:57:33.100518 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.20s
2026-04-04 00:57:33.100523 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.90s
2026-04-04 00:57:33.100527 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s
2026-04-04 00:57:33.100532 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s
2026-04-04 00:57:33.100536 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s
2026-04-04 00:57:33.100541 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.59s
2026-04-04 00:57:33.100545 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s
2026-04-04 00:57:33.100550 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-04-04 00:57:33.100554 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-04-04 00:57:33.100558 | orchestrator |
2026-04-04 00:57:33.100563 | orchestrator |
2026-04-04 00:57:33.100568 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:57:33.100572 | orchestrator |
2026-04-04 00:57:33.100576 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:57:33.100580 | orchestrator | Saturday 04 April 2026 00:57:01 +0000 (0:00:00.192) 0:00:00.192 ********
2026-04-04 00:57:33.100585 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:33.100590 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:33.100594 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:33.100599 | orchestrator |
2026-04-04 00:57:33.100609 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:57:33.100614 | orchestrator | Saturday 04 April 2026 00:57:02 +0000 (0:00:00.344) 0:00:00.537 ********
2026-04-04 00:57:33.100619 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-04 00:57:33.100654 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-04 00:57:33.100659 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-04 00:57:33.100665 | orchestrator |
2026-04-04 00:57:33.100672 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-04-04 00:57:33.100678 | orchestrator |
2026-04-04 00:57:33.100684 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-04-04 00:57:33.100692 | orchestrator | Saturday 04 April 2026 00:57:02 +0000 (0:00:00.482) 0:00:01.019 ********
2026-04-04 00:57:33.100698 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:33.100703 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:33.100709 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:33.100716 | orchestrator |
2026-04-04 00:57:33.100722 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 00:57:33.100731 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:57:33.100738 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:57:33.100744 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 00:57:33.100750 | orchestrator |
2026-04-04 00:57:33.100757 | orchestrator |
2026-04-04 00:57:33.100763 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 00:57:33.100769 | orchestrator | Saturday 04 April 2026 00:57:03 +0000 (0:00:01.060) 0:00:02.079 ********
2026-04-04 00:57:33.100776 | orchestrator | ===============================================================================
2026-04-04 00:57:33.100783 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.06s
2026-04-04 00:57:33.100789 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-04-04 00:57:33.100796 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-04-04 00:57:33.100802 | orchestrator |
2026-04-04 00:57:33.100809 | orchestrator |
2026-04-04 00:57:33.100816 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 00:57:33.100823 | orchestrator |
2026-04-04 00:57:33.100830 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 00:57:33.100836 | orchestrator | Saturday 04 April 2026 00:54:53 +0000 (0:00:00.274) 0:00:00.274 ********
2026-04-04 00:57:33.100843 | orchestrator | ok: [testbed-node-0]
2026-04-04 00:57:33.100850 | orchestrator | ok: [testbed-node-1]
2026-04-04 00:57:33.100855 | orchestrator | ok: [testbed-node-2]
2026-04-04 00:57:33.100859 | orchestrator |
2026-04-04 00:57:33.100862 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 00:57:33.100873 | orchestrator | Saturday 04 April 2026 00:54:53 +0000 (0:00:00.302) 0:00:00.576 ********
2026-04-04 00:57:33.100877 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-04 00:57:33.100881 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-04 00:57:33.100885 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-04 00:57:33.100889 | orchestrator |
2026-04-04 00:57:33.100893 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-04 00:57:33.100896 | orchestrator |
2026-04-04 00:57:33.100925 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-04 00:57:33.100929 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:00.273) 0:00:00.850 ********
2026-04-04 00:57:33.100934 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 00:57:33.100946 | orchestrator |
2026-04-04 00:57:33.100952 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-04 00:57:33.100962 | orchestrator | Saturday 04 April 2026 00:54:54 +0000 (0:00:00.546) 0:00:01.397 ********
2026-04-04 00:57:33.100973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-04 00:57:33.100983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-04 00:57:33.100990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-04 00:57:33.101007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-04 00:57:33.101037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-04 00:57:33.101053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-04 00:57:33.101059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-04 00:57:33.101066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-04 00:57:33.101073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-04 00:57:33.101080 | orchestrator |
2026-04-04 00:57:33.101086 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-04 00:57:33.101092 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:01.907) 0:00:03.305 ********
2026-04-04 00:57:33.101097 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:57:33.101103 | orchestrator |
2026-04-04 00:57:33.101109 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-04 00:57:33.101115 | orchestrator | Saturday 04 April 2026 00:54:56 +0000 (0:00:00.108) 0:00:03.413 ********
2026-04-04 00:57:33.101121 | orchestrator | skipping: [testbed-node-0]
2026-04-04 00:57:33.101127 | orchestrator | skipping: [testbed-node-1]
2026-04-04 00:57:33.101133 | orchestrator | skipping: [testbed-node-2]
2026-04-04 00:57:33.101139 | orchestrator |
2026-04-04 00:57:33.101145 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-04 00:57:33.101152 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:00.251) 0:00:03.664 ********
2026-04-04 00:57:33.101164 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 00:57:33.101171 | orchestrator |
2026-04-04 00:57:33.101177 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-04
00:57:33.101193 | orchestrator | Saturday 04 April 2026 00:54:57 +0000 (0:00:00.855) 0:00:04.520 ******** 2026-04-04 00:57:33.101201 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:33.101207 | orchestrator | 2026-04-04 00:57:33.101214 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-04 00:57:33.101225 | orchestrator | Saturday 04 April 2026 00:54:58 +0000 (0:00:00.676) 0:00:05.196 ******** 2026-04-04 00:57:33.101232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101309 | orchestrator | 2026-04-04 00:57:33.101316 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-04 00:57:33.101322 | orchestrator | Saturday 04 April 2026 00:55:01 +0000 (0:00:03.051) 0:00:08.247 ******** 2026-04-04 00:57:33.101332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101365 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-04 00:57:33.101385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101397 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101432 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101438 | orchestrator | 2026-04-04 00:57:33.101445 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-04 00:57:33.101452 | orchestrator | Saturday 04 April 2026 00:55:02 +0000 (0:00:00.609) 0:00:08.857 ******** 2026-04-04 00:57:33.101459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101486 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101517 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 
00:57:33.101541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101546 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101550 | orchestrator | 2026-04-04 00:57:33.101553 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-04 00:57:33.101557 | orchestrator | Saturday 04 April 2026 00:55:03 +0000 (0:00:00.904) 0:00:09.762 ******** 2026-04-04 00:57:33.101561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2026-04-04 00:57:33.101662 | orchestrator | 2026-04-04 00:57:33.101668 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-04 00:57:33.101674 | orchestrator | Saturday 04 April 2026 00:55:05 +0000 (0:00:02.865) 0:00:12.627 ******** 2026-04-04 00:57:33.101689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 
00:57:33.101703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.101727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.101750 | orchestrator | 2026-04-04 00:57:33.101754 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-04 00:57:33.101758 | orchestrator | Saturday 04 April 2026 00:55:10 +0000 (0:00:04.941) 0:00:17.568 ******** 2026-04-04 00:57:33.101762 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.101766 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:33.101770 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:33.101773 | orchestrator | 2026-04-04 00:57:33.101777 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-04 00:57:33.101781 | orchestrator | Saturday 04 April 2026 00:55:12 +0000 (0:00:01.406) 0:00:18.975 ******** 2026-04-04 00:57:33.101785 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101788 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101792 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101796 | orchestrator | 2026-04-04 00:57:33.101800 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-04 00:57:33.101805 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.908) 0:00:19.883 ******** 2026-04-04 00:57:33.101811 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101819 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101828 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101834 | orchestrator | 2026-04-04 00:57:33.101839 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-04 00:57:33.101845 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.274) 0:00:20.158 ******** 2026-04-04 00:57:33.101851 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101856 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101862 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101867 | orchestrator | 2026-04-04 00:57:33.101873 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-04 00:57:33.101878 | orchestrator | Saturday 04 April 2026 00:55:13 +0000 (0:00:00.253) 0:00:20.411 ******** 2026-04-04 00:57:33.101888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101917 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.101923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101943 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.101958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-04 00:57:33.101965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-04 00:57:33.101976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-04 00:57:33.101983 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.101989 | orchestrator | 2026-04-04 00:57:33.101993 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 00:57:33.101997 | orchestrator | Saturday 04 April 2026 00:55:14 +0000 (0:00:00.576) 0:00:20.988 ******** 2026-04-04 00:57:33.102000 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102004 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.102008 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.102062 | orchestrator | 2026-04-04 00:57:33.102071 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-04 00:57:33.102078 | orchestrator | Saturday 04 April 2026 00:55:14 +0000 (0:00:00.459) 0:00:21.447 ******** 2026-04-04 00:57:33.102085 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 00:57:33.102092 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 00:57:33.102099 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-04 00:57:33.102106 | orchestrator | 2026-04-04 00:57:33.102112 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-04 00:57:33.102116 | orchestrator | Saturday 04 April 2026 00:55:16 +0000 (0:00:01.770) 0:00:23.218 ******** 2026-04-04 00:57:33.102120 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:57:33.102124 | orchestrator | 2026-04-04 00:57:33.102128 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-04 00:57:33.102132 | orchestrator | Saturday 04 April 2026 00:55:17 +0000 (0:00:01.086) 0:00:24.305 ******** 2026-04-04 00:57:33.102136 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102140 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.102143 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.102147 | orchestrator | 2026-04-04 00:57:33.102151 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-04 00:57:33.102155 | orchestrator | Saturday 04 April 2026 00:55:18 +0000 (0:00:00.832) 0:00:25.138 ******** 2026-04-04 00:57:33.102159 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 00:57:33.102162 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-04 00:57:33.102166 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-04 00:57:33.102170 | orchestrator | 2026-04-04 00:57:33.102174 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-04-04 00:57:33.102178 | orchestrator | Saturday 04 April 2026 00:55:19 +0000 (0:00:01.203) 0:00:26.341 ******** 2026-04-04 00:57:33.102182 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:33.102186 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:33.102190 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:33.102194 | orchestrator | 2026-04-04 
00:57:33.102198 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-04 00:57:33.102206 | orchestrator | Saturday 04 April 2026 00:55:20 +0000 (0:00:00.483) 0:00:26.825 ******** 2026-04-04 00:57:33.102210 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 00:57:33.102214 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 00:57:33.102221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-04 00:57:33.102225 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 00:57:33.102229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 00:57:33.102237 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-04 00:57:33.102241 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 00:57:33.102245 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 00:57:33.102249 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-04 00:57:33.102252 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 00:57:33.102256 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 00:57:33.102260 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-04 00:57:33.102264 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-04-04 00:57:33.102267 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-04 00:57:33.102271 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-04 00:57:33.102275 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 00:57:33.102279 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 00:57:33.102283 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 00:57:33.102286 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 00:57:33.102290 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 00:57:33.102294 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 00:57:33.102298 | orchestrator | 2026-04-04 00:57:33.102301 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-04 00:57:33.102305 | orchestrator | Saturday 04 April 2026 00:55:28 +0000 (0:00:08.208) 0:00:35.033 ******** 2026-04-04 00:57:33.102309 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 00:57:33.102313 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 00:57:33.102316 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 00:57:33.102321 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 00:57:33.102324 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 00:57:33.102328 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 00:57:33.102332 | orchestrator | 2026-04-04 00:57:33.102336 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-04 00:57:33.102339 | orchestrator | Saturday 04 April 2026 00:55:30 +0000 (0:00:02.516) 0:00:37.549 ******** 2026-04-04 00:57:33.102344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.102359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.102364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-04 00:57:33.102368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-04 00:57:33.102402 | orchestrator | 2026-04-04 00:57:33.102405 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 00:57:33.102409 | orchestrator | Saturday 04 April 2026 00:55:32 +0000 (0:00:02.072) 0:00:39.622 ******** 2026-04-04 00:57:33.102413 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102417 | orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.102420 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.102424 | orchestrator | 2026-04-04 00:57:33.102428 | 
orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-04-04 00:57:33.102432 | orchestrator | Saturday 04 April 2026 00:55:33 +0000 (0:00:00.435) 0:00:40.057 ******** 2026-04-04 00:57:33.102435 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102439 | orchestrator | 2026-04-04 00:57:33.102443 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-04-04 00:57:33.102447 | orchestrator | Saturday 04 April 2026 00:55:36 +0000 (0:00:02.691) 0:00:42.749 ******** 2026-04-04 00:57:33.102450 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102454 | orchestrator | 2026-04-04 00:57:33.102458 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-04-04 00:57:33.102461 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:02.985) 0:00:45.734 ******** 2026-04-04 00:57:33.102469 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:33.102473 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:33.102477 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:33.102480 | orchestrator | 2026-04-04 00:57:33.102484 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-04-04 00:57:33.102488 | orchestrator | Saturday 04 April 2026 00:55:39 +0000 (0:00:00.699) 0:00:46.434 ******** 2026-04-04 00:57:33.102492 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:33.102495 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:33.102499 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:33.102503 | orchestrator | 2026-04-04 00:57:33.102507 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-04-04 00:57:33.102510 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.258) 0:00:46.692 ******** 2026-04-04 00:57:33.102514 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102518 | 
orchestrator | skipping: [testbed-node-1] 2026-04-04 00:57:33.102521 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.102525 | orchestrator | 2026-04-04 00:57:33.102552 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-04-04 00:57:33.102556 | orchestrator | Saturday 04 April 2026 00:55:40 +0000 (0:00:00.299) 0:00:46.991 ******** 2026-04-04 00:57:33.102560 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102563 | orchestrator | 2026-04-04 00:57:33.102567 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-04-04 00:57:33.102571 | orchestrator | Saturday 04 April 2026 00:55:55 +0000 (0:00:15.172) 0:01:02.164 ******** 2026-04-04 00:57:33.102574 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102578 | orchestrator | 2026-04-04 00:57:33.102582 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 00:57:33.102586 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:10.082) 0:01:12.247 ******** 2026-04-04 00:57:33.102589 | orchestrator | 2026-04-04 00:57:33.102593 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 00:57:33.102597 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:00.057) 0:01:12.304 ******** 2026-04-04 00:57:33.102601 | orchestrator | 2026-04-04 00:57:33.102604 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-04-04 00:57:33.102608 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:00.057) 0:01:12.361 ******** 2026-04-04 00:57:33.102612 | orchestrator | 2026-04-04 00:57:33.102616 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-04-04 00:57:33.102620 | orchestrator | Saturday 04 April 2026 00:56:05 +0000 (0:00:00.061) 0:01:12.422 ******** 2026-04-04 00:57:33.102759 | 
orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102775 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:33.102779 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:33.102783 | orchestrator | 2026-04-04 00:57:33.102787 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-04-04 00:57:33.102791 | orchestrator | Saturday 04 April 2026 00:56:20 +0000 (0:00:14.851) 0:01:27.274 ******** 2026-04-04 00:57:33.102799 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102803 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:33.102807 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:33.102811 | orchestrator | 2026-04-04 00:57:33.102815 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-04-04 00:57:33.102818 | orchestrator | Saturday 04 April 2026 00:56:25 +0000 (0:00:05.180) 0:01:32.454 ******** 2026-04-04 00:57:33.102828 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102832 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:33.102836 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:33.102840 | orchestrator | 2026-04-04 00:57:33.102843 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 00:57:33.102847 | orchestrator | Saturday 04 April 2026 00:56:32 +0000 (0:00:06.238) 0:01:38.692 ******** 2026-04-04 00:57:33.102858 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 00:57:33.102862 | orchestrator | 2026-04-04 00:57:33.102866 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-04-04 00:57:33.102869 | orchestrator | Saturday 04 April 2026 00:56:32 +0000 (0:00:00.513) 0:01:39.205 ******** 2026-04-04 00:57:33.102873 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:33.102877 | 
orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:33.102881 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:33.102884 | orchestrator | 2026-04-04 00:57:33.102888 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-04-04 00:57:33.102892 | orchestrator | Saturday 04 April 2026 00:56:33 +0000 (0:00:00.686) 0:01:39.892 ******** 2026-04-04 00:57:33.102896 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:33.102900 | orchestrator | 2026-04-04 00:57:33.102904 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-04-04 00:57:33.102907 | orchestrator | Saturday 04 April 2026 00:56:34 +0000 (0:00:01.587) 0:01:41.480 ******** 2026-04-04 00:57:33.102911 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-04-04 00:57:33.102915 | orchestrator | 2026-04-04 00:57:33.102919 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-04-04 00:57:33.102923 | orchestrator | Saturday 04 April 2026 00:56:48 +0000 (0:00:13.246) 0:01:54.727 ******** 2026-04-04 00:57:33.102926 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-04-04 00:57:33.102930 | orchestrator | 2026-04-04 00:57:33.102934 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-04-04 00:57:33.102938 | orchestrator | Saturday 04 April 2026 00:57:18 +0000 (0:00:30.356) 0:02:25.083 ******** 2026-04-04 00:57:33.102942 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-04-04 00:57:33.102945 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-04-04 00:57:33.102949 | orchestrator | 2026-04-04 00:57:33.102953 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-04-04 00:57:33.102957 | orchestrator | Saturday 04 
April 2026 00:57:26 +0000 (0:00:07.757) 0:02:32.841 ******** 2026-04-04 00:57:33.102961 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102964 | orchestrator | 2026-04-04 00:57:33.102968 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-04-04 00:57:33.102972 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:00.203) 0:02:33.044 ******** 2026-04-04 00:57:33.102976 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102979 | orchestrator | 2026-04-04 00:57:33.102983 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-04-04 00:57:33.102987 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:00.250) 0:02:33.295 ******** 2026-04-04 00:57:33.102991 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.102995 | orchestrator | 2026-04-04 00:57:33.102998 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-04-04 00:57:33.103002 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:00.207) 0:02:33.502 ******** 2026-04-04 00:57:33.103006 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.103009 | orchestrator | 2026-04-04 00:57:33.103013 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-04-04 00:57:33.103017 | orchestrator | Saturday 04 April 2026 00:57:27 +0000 (0:00:00.940) 0:02:34.443 ******** 2026-04-04 00:57:33.103021 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:33.103024 | orchestrator | 2026-04-04 00:57:33.103028 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-04 00:57:33.103032 | orchestrator | Saturday 04 April 2026 00:57:31 +0000 (0:00:03.868) 0:02:38.311 ******** 2026-04-04 00:57:33.103036 | orchestrator | skipping: [testbed-node-0] 2026-04-04 00:57:33.103040 | orchestrator | skipping: [testbed-node-1] 2026-04-04 
00:57:33.103043 | orchestrator | skipping: [testbed-node-2] 2026-04-04 00:57:33.103050 | orchestrator | 2026-04-04 00:57:33.103054 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:57:33.103058 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-04 00:57:33.103063 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:57:33.103067 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 00:57:33.103071 | orchestrator | 2026-04-04 00:57:33.103075 | orchestrator | 2026-04-04 00:57:33.103079 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:57:33.103082 | orchestrator | Saturday 04 April 2026 00:57:32 +0000 (0:00:00.965) 0:02:39.276 ******** 2026-04-04 00:57:33.103086 | orchestrator | =============================================================================== 2026-04-04 00:57:33.103093 | orchestrator | service-ks-register : keystone | Creating services --------------------- 30.36s 2026-04-04 00:57:33.103108 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.17s 2026-04-04 00:57:33.103112 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.85s 2026-04-04 00:57:33.103116 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.25s 2026-04-04 00:57:33.103123 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.08s 2026-04-04 00:57:33.103132 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.21s 2026-04-04 00:57:33.103136 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.76s 2026-04-04 00:57:33.103140 | orchestrator | 
keystone : Restart keystone container ----------------------------------- 6.24s 2026-04-04 00:57:33.103144 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.18s 2026-04-04 00:57:33.103148 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.94s 2026-04-04 00:57:33.103152 | orchestrator | keystone : Creating default user role ----------------------------------- 3.86s 2026-04-04 00:57:33.103156 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.05s 2026-04-04 00:57:33.103159 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.99s 2026-04-04 00:57:33.103163 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.87s 2026-04-04 00:57:33.103167 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.69s 2026-04-04 00:57:33.103171 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.52s 2026-04-04 00:57:33.103174 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.07s 2026-04-04 00:57:33.103178 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.91s 2026-04-04 00:57:33.103182 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.77s 2026-04-04 00:57:33.103186 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.59s 2026-04-04 00:57:33.103190 | orchestrator | 2026-04-04 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:36.134449 | orchestrator | 2026-04-04 00:57:36 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED 2026-04-04 00:57:36.134542 | orchestrator | 2026-04-04 00:57:36 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:36.134552 | orchestrator | 
2026-04-04 00:57:36 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:36.134559 | orchestrator | 2026-04-04 00:57:36 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:36.134565 | orchestrator | 2026-04-04 00:57:36 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:36.134596 | orchestrator | 2026-04-04 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:39.156668 | orchestrator | 2026-04-04 00:57:39 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED 2026-04-04 00:57:39.156747 | orchestrator | 2026-04-04 00:57:39 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:39.157539 | orchestrator | 2026-04-04 00:57:39 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:39.158235 | orchestrator | 2026-04-04 00:57:39 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:39.159033 | orchestrator | 2026-04-04 00:57:39 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:39.159062 | orchestrator | 2026-04-04 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:42.203164 | orchestrator | 2026-04-04 00:57:42 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED 2026-04-04 00:57:42.203268 | orchestrator | 2026-04-04 00:57:42 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:42.204155 | orchestrator | 2026-04-04 00:57:42 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:42.204930 | orchestrator | 2026-04-04 00:57:42 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:42.205890 | orchestrator | 2026-04-04 00:57:42 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:42.205934 | orchestrator | 
2026-04-04 00:57:42 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:45.240337 | orchestrator | 2026-04-04 00:57:45 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state STARTED 2026-04-04 00:57:45.240429 | orchestrator | 2026-04-04 00:57:45 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:45.243518 | orchestrator | 2026-04-04 00:57:45 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:45.243593 | orchestrator | 2026-04-04 00:57:45 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:45.243600 | orchestrator | 2026-04-04 00:57:45 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:45.243652 | orchestrator | 2026-04-04 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:48.280685 | orchestrator | 2026-04-04 00:57:48 | INFO  | Task e2de6bf7-5c23-4f85-90c5-52c63b8da46d is in state SUCCESS 2026-04-04 00:57:48.280761 | orchestrator | 2026-04-04 00:57:48 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:48.281473 | orchestrator | 2026-04-04 00:57:48 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:48.282056 | orchestrator | 2026-04-04 00:57:48 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:48.282701 | orchestrator | 2026-04-04 00:57:48 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:48.282730 | orchestrator | 2026-04-04 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:51.311006 | orchestrator | 2026-04-04 00:57:51 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state STARTED 2026-04-04 00:57:51.313650 | orchestrator | 2026-04-04 00:57:51 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 00:57:51.314401 | orchestrator | 2026-04-04 00:57:51 | INFO  | 
Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED 2026-04-04 00:57:51.317646 | orchestrator | 2026-04-04 00:57:51 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 00:57:51.318264 | orchestrator | 2026-04-04 00:57:51 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 00:57:51.318288 | orchestrator | 2026-04-04 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-04-04 00:57:54.352827 | orchestrator | 2026-04-04 00:57:54 | INFO  | Task c3b71faa-38db-4c53-b63c-5bcb33a919ae is in state SUCCESS 2026-04-04 00:57:54.353168 | orchestrator | 2026-04-04 00:57:54.353191 | orchestrator | 2026-04-04 00:57:54.353197 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 00:57:54.353202 | orchestrator | 2026-04-04 00:57:54.353208 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 00:57:54.353213 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.287) 0:00:00.287 ******** 2026-04-04 00:57:54.353218 | orchestrator | ok: [testbed-node-0] 2026-04-04 00:57:54.353224 | orchestrator | ok: [testbed-node-1] 2026-04-04 00:57:54.353229 | orchestrator | ok: [testbed-node-2] 2026-04-04 00:57:54.353234 | orchestrator | ok: [testbed-node-3] 2026-04-04 00:57:54.353238 | orchestrator | ok: [testbed-node-4] 2026-04-04 00:57:54.353243 | orchestrator | ok: [testbed-node-5] 2026-04-04 00:57:54.353247 | orchestrator | ok: [testbed-manager] 2026-04-04 00:57:54.353252 | orchestrator | 2026-04-04 00:57:54.353256 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 00:57:54.353261 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.654) 0:00:00.942 ******** 2026-04-04 00:57:54.353266 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353271 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353275 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353280 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353284 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353289 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353294 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-04-04 00:57:54.353298 | orchestrator | 2026-04-04 00:57:54.353303 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-04 00:57:54.353307 | orchestrator | 2026-04-04 00:57:54.353312 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-04-04 00:57:54.353316 | orchestrator | Saturday 04 April 2026 00:57:09 +0000 (0:00:00.753) 0:00:01.695 ******** 2026-04-04 00:57:54.353322 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-04 00:57:54.353328 | orchestrator | 2026-04-04 00:57:54.353332 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-04-04 00:57:54.353337 | orchestrator | Saturday 04 April 2026 00:57:10 +0000 (0:00:01.132) 0:00:02.828 ******** 2026-04-04 00:57:54.353342 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2026-04-04 00:57:54.353346 | orchestrator | 2026-04-04 00:57:54.353351 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-04-04 00:57:54.353355 | orchestrator | Saturday 04 April 2026 00:57:18 +0000 (0:00:07.607) 0:00:10.435 ******** 2026-04-04 00:57:54.353360 | orchestrator | changed: [testbed-node-0] => (item=swift -> 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-04-04 00:57:54.353367 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-04-04 00:57:54.353371 | orchestrator | 2026-04-04 00:57:54.353376 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-04-04 00:57:54.353393 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:08.110) 0:00:18.545 ******** 2026-04-04 00:57:54.353413 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 00:57:54.353418 | orchestrator | 2026-04-04 00:57:54.353423 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-04-04 00:57:54.353427 | orchestrator | Saturday 04 April 2026 00:57:30 +0000 (0:00:03.997) 0:00:22.543 ******** 2026-04-04 00:57:54.353432 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2026-04-04 00:57:54.353436 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 00:57:54.353441 | orchestrator | 2026-04-04 00:57:54.353446 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-04-04 00:57:54.353450 | orchestrator | Saturday 04 April 2026 00:57:34 +0000 (0:00:04.428) 0:00:26.971 ******** 2026-04-04 00:57:54.353455 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 00:57:54.353459 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2026-04-04 00:57:54.353464 | orchestrator | 2026-04-04 00:57:54.353469 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-04-04 00:57:54.353473 | orchestrator | Saturday 04 April 2026 00:57:42 +0000 (0:00:07.209) 0:00:34.180 ******** 2026-04-04 00:57:54.353478 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2026-04-04 00:57:54.353482 | orchestrator | 
2026-04-04 00:57:54.353486 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:57:54.353491 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353496 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353501 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353506 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353510 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353522 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353527 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353531 | orchestrator | 2026-04-04 00:57:54.353536 | orchestrator | 2026-04-04 00:57:54.353540 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:57:54.353545 | orchestrator | Saturday 04 April 2026 00:57:47 +0000 (0:00:05.103) 0:00:39.283 ******** 2026-04-04 00:57:54.353549 | orchestrator | =============================================================================== 2026-04-04 00:57:54.353554 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.11s 2026-04-04 00:57:54.353558 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 7.61s 2026-04-04 00:57:54.353563 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.21s 2026-04-04 00:57:54.353567 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.10s 2026-04-04 
00:57:54.353572 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.43s 2026-04-04 00:57:54.353577 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 4.00s 2026-04-04 00:57:54.353581 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.13s 2026-04-04 00:57:54.353586 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2026-04-04 00:57:54.353634 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s 2026-04-04 00:57:54.353638 | orchestrator | 2026-04-04 00:57:54.353643 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-04 00:57:54.353652 | orchestrator | 2.16.14 2026-04-04 00:57:54.353657 | orchestrator | 2026-04-04 00:57:54.353662 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-04 00:57:54.353666 | orchestrator | 2026-04-04 00:57:54.353671 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-04 00:57:54.353675 | orchestrator | Saturday 04 April 2026 00:57:02 +0000 (0:00:00.231) 0:00:00.232 ******** 2026-04-04 00:57:54.353679 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353684 | orchestrator | 2026-04-04 00:57:54.353689 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-04 00:57:54.353693 | orchestrator | Saturday 04 April 2026 00:57:04 +0000 (0:00:01.734) 0:00:01.967 ******** 2026-04-04 00:57:54.353698 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353702 | orchestrator | 2026-04-04 00:57:54.353707 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-04 00:57:54.353711 | orchestrator | Saturday 04 April 2026 00:57:05 +0000 (0:00:01.121) 0:00:03.088 ******** 
2026-04-04 00:57:54.353716 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353720 | orchestrator | 2026-04-04 00:57:54.353725 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-04 00:57:54.353729 | orchestrator | Saturday 04 April 2026 00:57:06 +0000 (0:00:01.256) 0:00:04.345 ******** 2026-04-04 00:57:54.353734 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353738 | orchestrator | 2026-04-04 00:57:54.353742 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-04 00:57:54.353751 | orchestrator | Saturday 04 April 2026 00:57:07 +0000 (0:00:01.040) 0:00:05.385 ******** 2026-04-04 00:57:54.353755 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353760 | orchestrator | 2026-04-04 00:57:54.353764 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-04 00:57:54.353769 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.905) 0:00:06.291 ******** 2026-04-04 00:57:54.353773 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353777 | orchestrator | 2026-04-04 00:57:54.353783 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-04 00:57:54.353788 | orchestrator | Saturday 04 April 2026 00:57:09 +0000 (0:00:00.910) 0:00:07.201 ******** 2026-04-04 00:57:54.353793 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353798 | orchestrator | 2026-04-04 00:57:54.353804 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-04 00:57:54.353809 | orchestrator | Saturday 04 April 2026 00:57:10 +0000 (0:00:01.127) 0:00:08.329 ******** 2026-04-04 00:57:54.353815 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353820 | orchestrator | 2026-04-04 00:57:54.353826 | orchestrator | TASK [Create admin user] 
******************************************************* 2026-04-04 00:57:54.353831 | orchestrator | Saturday 04 April 2026 00:57:11 +0000 (0:00:00.973) 0:00:09.303 ******** 2026-04-04 00:57:54.353836 | orchestrator | changed: [testbed-manager] 2026-04-04 00:57:54.353841 | orchestrator | 2026-04-04 00:57:54.353846 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-04 00:57:54.353852 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:14.477) 0:00:23.781 ******** 2026-04-04 00:57:54.353858 | orchestrator | skipping: [testbed-manager] 2026-04-04 00:57:54.353863 | orchestrator | 2026-04-04 00:57:54.353868 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 00:57:54.353873 | orchestrator | 2026-04-04 00:57:54.353878 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-04 00:57:54.353883 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:00.159) 0:00:23.941 ******** 2026-04-04 00:57:54.353888 | orchestrator | changed: [testbed-node-0] 2026-04-04 00:57:54.353893 | orchestrator | 2026-04-04 00:57:54.353898 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 00:57:54.353903 | orchestrator | 2026-04-04 00:57:54.353908 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-04 00:57:54.353917 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:11.974) 0:00:35.915 ******** 2026-04-04 00:57:54.353922 | orchestrator | changed: [testbed-node-1] 2026-04-04 00:57:54.353927 | orchestrator | 2026-04-04 00:57:54.353933 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-04 00:57:54.353938 | orchestrator | 2026-04-04 00:57:54.353943 | orchestrator | TASK [Restart ceph manager service] ******************************************** 
2026-04-04 00:57:54.353953 | orchestrator | Saturday 04 April 2026 00:57:39 +0000 (0:00:01.421) 0:00:37.337 ******** 2026-04-04 00:57:54.353958 | orchestrator | changed: [testbed-node-2] 2026-04-04 00:57:54.353963 | orchestrator | 2026-04-04 00:57:54.353968 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 00:57:54.353974 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-04 00:57:54.353979 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353984 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353989 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 00:57:54.353995 | orchestrator | 2026-04-04 00:57:54.354000 | orchestrator | 2026-04-04 00:57:54.354054 | orchestrator | 2026-04-04 00:57:54.354060 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 00:57:54.354065 | orchestrator | Saturday 04 April 2026 00:57:51 +0000 (0:00:11.416) 0:00:48.754 ******** 2026-04-04 00:57:54.354070 | orchestrator | =============================================================================== 2026-04-04 00:57:54.354075 | orchestrator | Restart ceph manager service ------------------------------------------- 24.81s 2026-04-04 00:57:54.354080 | orchestrator | Create admin user ------------------------------------------------------ 14.48s 2026-04-04 00:57:54.354086 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.74s 2026-04-04 00:57:54.354091 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.26s 2026-04-04 00:57:54.354096 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.13s 
2026-04-04 00:57:54.354101 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s
2026-04-04 00:57:54.354106 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s
2026-04-04 00:57:54.354112 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.97s
2026-04-04 00:57:54.354117 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.91s
2026-04-04 00:57:54.354122 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.91s
2026-04-04 00:57:54.354128 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-04-04 00:57:54.354691 | orchestrator | 2026-04-04 00:57:54 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 00:57:54.355454 | orchestrator | 2026-04-04 00:57:54 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:57:54.356092 | orchestrator | 2026-04-04 00:57:54 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:57:54.357624 | orchestrator | 2026-04-04 00:57:54 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 00:57:54.357655 | orchestrator | 2026-04-04 00:57:54 | INFO  | Wait 1 second(s) until the next check
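The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a task-status polling loop. A minimal sketch of that pattern, with illustrative names (`wait_for_tasks`, `get_state` are assumptions, not the actual OSISM client API):

```python
import time

# Hypothetical sketch of the polling loop behind the log records above:
# query each task's state, log it, and wait between rounds until no task
# is still STARTED. `get_state` stands in for the real task-API call.
def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0, sleep=time.sleep):
    """Return {task_id: final_state} once no task reports STARTED."""
    deadline = time.monotonic() + timeout
    while True:
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        if time.monotonic() >= deadline:
            raise TimeoutError("tasks still STARTED at timeout")
        print(f"Wait {int(interval)} second(s) until the next check")
        sleep(interval)
```

Under this reading, the roughly three-second gap between rounds in the log is the configured wait plus the time spent on the status queries themselves.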
2026-04-04 00:59:59.030187 | orchestrator | 2026-04-04 00:59:59 | INFO  | Task
99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 00:59:59.034757 | orchestrator | 2026-04-04 00:59:59 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 00:59:59.037825 | orchestrator | 2026-04-04 00:59:59 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 00:59:59.039327 | orchestrator | 2026-04-04 00:59:59 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 00:59:59.039456 | orchestrator | 2026-04-04 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:02.081512 | orchestrator | 2026-04-04 01:00:02 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:02.082211 | orchestrator | 2026-04-04 01:00:02 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state STARTED
2026-04-04 01:00:02.083289 | orchestrator | 2026-04-04 01:00:02 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED
2026-04-04 01:00:02.084094 | orchestrator | 2026-04-04 01:00:02 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:02.084256 | orchestrator | 2026-04-04 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:05.118065 | orchestrator | 2026-04-04 01:00:05 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:05.120680 | orchestrator | 2026-04-04 01:00:05 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:05.125336 | orchestrator | 2026-04-04 01:00:05 | INFO  | Task 962c5246-689e-40c8-99ef-de97062b9030 is in state SUCCESS
2026-04-04 01:00:05.126902 | orchestrator |
2026-04-04 01:00:05.126944 | orchestrator |
2026-04-04 01:00:05.126949 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:00:05.126954 | orchestrator |
2026-04-04 01:00:05.126958 | orchestrator | TASK [Group hosts based on Kolla action]
***************************************
2026-04-04 01:00:05.126962 | orchestrator | Saturday 04 April 2026 00:57:01 +0000 (0:00:00.321) 0:00:00.321 ********
2026-04-04 01:00:05.126967 | orchestrator | ok: [testbed-manager]
2026-04-04 01:00:05.126975 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:00:05.126982 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:00:05.126989 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:00:05.126996 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:00:05.127003 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:00:05.127009 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:00:05.127016 | orchestrator |
2026-04-04 01:00:05.127023 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:00:05.127031 | orchestrator | Saturday 04 April 2026 00:57:02 +0000 (0:00:00.708) 0:00:01.030 ********
2026-04-04 01:00:05.127038 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127083 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127094 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127100 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127106 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127113 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127119 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-04 01:00:05.127156 | orchestrator |
2026-04-04 01:00:05.127163 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-04 01:00:05.127170 | orchestrator |
2026-04-04 01:00:05.127177 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-04 01:00:05.127184 | orchestrator | Saturday 04 April 2026 00:57:03
+0000 (0:00:00.886) 0:00:01.916 ********
2026-04-04 01:00:05.127222 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-04 01:00:05.127506 | orchestrator |
2026-04-04 01:00:05.127520 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-04 01:00:05.127528 | orchestrator | Saturday 04 April 2026 00:57:04 +0000 (0:00:01.333) 0:00:03.250 ********
2026-04-04 01:00:05.127537 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-04 01:00:05.127546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127801 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:00:05.127822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:00:05.127837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:00:05.127845 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-04 01:00:05.127853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-04 01:00:05.127911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-04 01:00:05.127941 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-04 01:00:05.127950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.127963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.127970 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.127976 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.127983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.127989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128165 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128206 | orchestrator | 2026-04-04 01:00:05.128210 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-04 01:00:05.128214 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:04.185) 0:00:07.435 ******** 2026-04-04 01:00:05.128218 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:00:05.128222 | orchestrator | 2026-04-04 01:00:05.128226 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-04 01:00:05.128231 | orchestrator | Saturday 04 April 
2026 00:57:10 +0000 (0:00:01.439) 0:00:08.875 ******** 2026-04-04 01:00:05.128235 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-04 01:00:05.128239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128247 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128274 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128278 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.128282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128290 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128318 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-04-04 01:00:05.128322 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128326 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-04 01:00:05.128331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128625 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128634 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.128641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.128702 | orchestrator | 2026-04-04 01:00:05.128712 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-04 01:00:05.128763 | orchestrator | Saturday 04 April 2026 00:57:15 +0000 (0:00:05.408) 0:00:14.283 ******** 2026-04-04 01:00:05.128771 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-04 01:00:05.128778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.128786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.128793 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-04 01:00:05.128826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.128838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.128850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128854 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.128859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.128866 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.128894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.128898 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.128902 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.129046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129075 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129084 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.129107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129116 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129120 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.129124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129140 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.129144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129171 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.129175 | orchestrator | 2026-04-04 01:00:05.129180 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-04 01:00:05.129184 | orchestrator | Saturday 04 April 2026 00:57:16 +0000 (0:00:01.281) 0:00:15.564 ******** 2026-04-04 01:00:05.129188 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-04 01:00:05.129195 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-04-04 01:00:05.129203 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-04 01:00:05.129229 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129282 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.129293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129307 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.129311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129343 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.129347 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-04 01:00:05.129386 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.129405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129418 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.129422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129437 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.129440 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-04 01:00:05.129445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-04 01:00:05.129466 | orchestrator | skipping: 
[testbed-node-5] 2026-04-04 01:00:05.129470 | orchestrator | 2026-04-04 01:00:05.129474 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-04 01:00:05.129478 | orchestrator | Saturday 04 April 2026 00:57:18 +0000 (0:00:01.850) 0:00:17.415 ******** 2026-04-04 01:00:05.129482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129487 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-04 01:00:05.129498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129553 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.129560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129574 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129616 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129641 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129648 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-04 01:00:05.129654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.129722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 
01:00:05.129736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.129748 | orchestrator | 2026-04-04 01:00:05.129756 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-04 01:00:05.129765 | orchestrator | Saturday 04 April 2026 00:57:25 +0000 (0:00:06.353) 0:00:23.769 ******** 2026-04-04 01:00:05.129772 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:00:05.129779 | orchestrator | 2026-04-04 01:00:05.129786 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-04 01:00:05.129811 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:00.930) 0:00:24.699 ******** 2026-04-04 01:00:05.129819 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129845 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129853 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 
1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129861 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129868 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.129897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129917 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129925 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129932 | orchestrator | skipping: 
[testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129939 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1086117, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129944 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129964 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129972 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129977 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129982 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 
'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.129987 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1086151, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8658652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129992 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.129997 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130043 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130054 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130059 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130064 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130069 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130078 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130088 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130104 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130108 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 
'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130112 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130116 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130120 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130124 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130151 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130155 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130159 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130163 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130167 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130171 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130195 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 
1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130200 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1086108, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.858638, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130204 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130208 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130212 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130216 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130243 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130250 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130256 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130267 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130278 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130287 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 
1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130310 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130318 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130323 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130329 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130335 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1086132, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8616855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130345 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130350 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130392 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130398 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130403 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130409 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130420 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 
1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130427 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130453 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130461 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130468 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130475 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130489 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130496 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130500 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130510 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130514 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130518 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1086084, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.856744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130522 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130529 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 
1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130533 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130537 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130548 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130552 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130556 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130560 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130567 | orchestrator | skipping: 
[testbed-node-1] 2026-04-04 01:00:05.130571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130575 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130579 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130587 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1086118, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8595786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130591 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130595 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130602 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.130606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130610 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130613 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 
1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130626 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130631 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130634 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130641 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130646 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1086129, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8611395, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130650 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130654 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130662 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130667 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130671 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.130675 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130682 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130686 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130690 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 
'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130694 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.130698 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130706 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1086121, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8598592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130710 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130717 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.130721 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130725 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-04 01:00:05.130729 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.130733 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1086115, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.859102, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086150, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8647442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130740 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086079, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8533125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130749 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1086162, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8687563, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-04-04 01:00:05.130753 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1086135, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.862551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130759 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1086105, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.857448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130763 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1086081, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.853744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130767 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1086124, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8603482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130771 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1086123, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.860088, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130775 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1086160, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8677442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-04 01:00:05.130779 | orchestrator | 2026-04-04 01:00:05.130783 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-04 01:00:05.130789 | orchestrator | Saturday 04 April 2026 00:57:49 +0000 (0:00:23.354) 0:00:48.054 ******** 2026-04-04 
01:00:05.130793 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:00:05.130797 | orchestrator |
2026-04-04 01:00:05.130802 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-04 01:00:05.130806 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:00.688) 0:00:48.742 ********
2026-04-04 01:00:05.130810 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130821 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130825 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130829 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130833 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-04 01:00:05.130836 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130840 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130844 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130851 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130855 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-04 01:00:05.130859 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130867 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130870 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130874 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130878 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 01:00:05.130882 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130886 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130889 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130893 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130897 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130901 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-04 01:00:05.130904 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130908 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130912 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130916 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130919 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130923 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-04 01:00:05.130927 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130935 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130942 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130946 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-04 01:00:05.130950 | orchestrator | [WARNING]: Skipped
2026-04-04 01:00:05.130954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130957 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-04-04 01:00:05.130961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-04-04 01:00:05.130965 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-04-04 01:00:05.130969 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-04 01:00:05.130972 | orchestrator |
2026-04-04 01:00:05.130976 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-04 01:00:05.130980 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:01.928) 0:00:50.671 ********
2026-04-04 01:00:05.130984 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.130991 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:00:05.130995 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.130998 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:00:05.131002 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.131006 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:00:05.131010 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.131014 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:00:05.131017 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.131021 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:00:05.131025 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.131029 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:00:05.131032 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-04-04 01:00:05.131036 | orchestrator |
2026-04-04 01:00:05.131040 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-04-04 01:00:05.131047 | orchestrator | Saturday 04 April 2026 00:58:06 +0000 (0:00:14.072) 0:01:04.743 ********
2026-04-04 01:00:05.131051 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131057 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:00:05.131061 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131064 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:00:05.131068 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131072 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:00:05.131076 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131080 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:00:05.131083 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131087 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:00:05.131091 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131095 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:00:05.131098 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-04-04 01:00:05.131102 | orchestrator |
2026-04-04 01:00:05.131106 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-04-04 01:00:05.131110 | orchestrator | Saturday 04 April 2026 00:58:09 +0000 (0:00:03.568) 0:01:08.312 ********
2026-04-04 01:00:05.131114 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131118 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:00:05.131122 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131126 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:00:05.131129 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131133 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:00:05.131137 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131141 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131147 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:00:05.131151 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131155 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:00:05.131159 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-04-04 01:00:05.131163 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:00:05.131166 | orchestrator |
2026-04-04 01:00:05.131170 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-04-04 01:00:05.131174 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:02.203) 0:01:10.516 ********
2026-04-04 01:00:05.131178 |
orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:00:05.131182 | orchestrator | 2026-04-04 01:00:05.131186 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-04 01:00:05.131189 | orchestrator | Saturday 04 April 2026 00:58:12 +0000 (0:00:00.732) 0:01:11.248 ******** 2026-04-04 01:00:05.131193 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131197 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.131201 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.131205 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.131208 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131212 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131216 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131220 | orchestrator | 2026-04-04 01:00:05.131223 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-04 01:00:05.131227 | orchestrator | Saturday 04 April 2026 00:58:13 +0000 (0:00:00.694) 0:01:11.943 ******** 2026-04-04 01:00:05.131231 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131235 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131239 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131242 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131246 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131250 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131254 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131257 | orchestrator | 2026-04-04 01:00:05.131261 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-04 01:00:05.131265 | orchestrator | Saturday 04 April 2026 00:58:15 +0000 (0:00:02.475) 0:01:14.418 ******** 2026-04-04 01:00:05.131269 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131273 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131276 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131280 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.131284 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131288 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.131293 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131297 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.131301 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131307 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131311 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131315 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131318 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-04 01:00:05.131322 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131326 | orchestrator | 2026-04-04 01:00:05.131330 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-04 01:00:05.131337 | orchestrator | Saturday 04 April 2026 00:58:17 +0000 (0:00:01.972) 0:01:16.390 ******** 2026-04-04 01:00:05.131341 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131345 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.131348 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131352 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.131356 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131360 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.131395 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131399 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131403 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131407 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131411 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-04 01:00:05.131414 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-04 01:00:05.131418 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131422 | orchestrator | 2026-04-04 01:00:05.131426 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-04 01:00:05.131430 | orchestrator | Saturday 04 April 2026 00:58:19 +0000 (0:00:01.622) 0:01:18.013 ******** 2026-04-04 01:00:05.131433 | orchestrator | [WARNING]: Skipped 2026-04-04 01:00:05.131437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-04 01:00:05.131441 | orchestrator | due to this access issue: 2026-04-04 01:00:05.131445 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-04 01:00:05.131448 | orchestrator | not a directory 2026-04-04 01:00:05.131452 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-04-04 01:00:05.131456 | orchestrator | 2026-04-04 01:00:05.131460 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-04 01:00:05.131464 | orchestrator | Saturday 04 April 2026 00:58:20 +0000 (0:00:00.934) 0:01:18.948 ******** 2026-04-04 01:00:05.131467 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131471 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.131475 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.131479 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.131482 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131486 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131490 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131493 | orchestrator | 2026-04-04 01:00:05.131497 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-04 01:00:05.131501 | orchestrator | Saturday 04 April 2026 00:58:21 +0000 (0:00:00.780) 0:01:19.728 ******** 2026-04-04 01:00:05.131505 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131508 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:05.131512 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:05.131516 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:05.131520 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:00:05.131523 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:00:05.131527 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:00:05.131531 | orchestrator | 2026-04-04 01:00:05.131535 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-04 01:00:05.131538 | orchestrator | Saturday 04 April 2026 00:58:21 +0000 (0:00:00.719) 0:01:20.448 ******** 2026-04-04 01:00:05.131543 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-04 01:00:05.131556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131568 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131576 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131580 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131593 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-04 01:00:05.131597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 
'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-04 01:00:05.131638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131651 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131667 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131685 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-04 01:00:05.131693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-04 01:00:05.131701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-04 01:00:05.131707 | orchestrator | 2026-04-04 01:00:05.131711 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-04 01:00:05.131715 | orchestrator | Saturday 04 April 2026 00:58:26 +0000 (0:00:05.014) 0:01:25.463 ******** 2026-04-04 01:00:05.131719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-04 01:00:05.131723 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:00:05.131726 | orchestrator | 2026-04-04 01:00:05.131730 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131734 | orchestrator | Saturday 04 April 2026 00:58:27 +0000 (0:00:01.033) 0:01:26.496 ******** 2026-04-04 01:00:05.131738 | orchestrator | 2026-04-04 01:00:05.131741 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131745 | orchestrator | Saturday 04 April 2026 00:58:27 +0000 (0:00:00.064) 0:01:26.560 ******** 2026-04-04 01:00:05.131749 | orchestrator | 2026-04-04 01:00:05.131753 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131757 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.078) 0:01:26.639 ******** 2026-04-04 01:00:05.131760 | orchestrator | 2026-04-04 01:00:05.131764 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-04-04 01:00:05.131768 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.076) 0:01:26.715 ******** 2026-04-04 01:00:05.131772 | orchestrator | 2026-04-04 01:00:05.131775 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131779 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.066) 0:01:26.782 ******** 2026-04-04 01:00:05.131783 | orchestrator | 2026-04-04 01:00:05.131787 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131790 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.060) 0:01:26.842 ******** 2026-04-04 01:00:05.131794 | orchestrator | 2026-04-04 01:00:05.131798 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-04 01:00:05.131802 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.059) 0:01:26.902 ******** 2026-04-04 01:00:05.131805 | orchestrator | 2026-04-04 01:00:05.131809 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-04 01:00:05.131813 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.087) 0:01:26.990 ******** 2026-04-04 01:00:05.131819 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:05.131823 | orchestrator | 2026-04-04 01:00:05.131827 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-04 01:00:05.131833 | orchestrator | Saturday 04 April 2026 00:58:47 +0000 (0:00:18.748) 0:01:45.738 ******** 2026-04-04 01:00:05.131837 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131840 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131844 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131848 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:00:05.131851 | orchestrator | 
changed: [testbed-manager] 2026-04-04 01:00:05.131855 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:00:05.131859 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:00:05.131863 | orchestrator | 2026-04-04 01:00:05.131866 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-04 01:00:05.131870 | orchestrator | Saturday 04 April 2026 00:59:00 +0000 (0:00:12.895) 0:01:58.634 ******** 2026-04-04 01:00:05.131874 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131878 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131881 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131885 | orchestrator | 2026-04-04 01:00:05.131889 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-04 01:00:05.131893 | orchestrator | Saturday 04 April 2026 00:59:10 +0000 (0:00:10.799) 0:02:09.434 ******** 2026-04-04 01:00:05.131897 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131900 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131904 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131911 | orchestrator | 2026-04-04 01:00:05.131915 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-04 01:00:05.131918 | orchestrator | Saturday 04 April 2026 00:59:20 +0000 (0:00:10.014) 0:02:19.448 ******** 2026-04-04 01:00:05.131922 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131926 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131929 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:00:05.131933 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:00:05.131937 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:00:05.131940 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:05.131944 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131948 | orchestrator | 2026-04-04 
01:00:05.131952 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-04 01:00:05.131955 | orchestrator | Saturday 04 April 2026 00:59:34 +0000 (0:00:13.683) 0:02:33.132 ******** 2026-04-04 01:00:05.131959 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:05.131963 | orchestrator | 2026-04-04 01:00:05.131967 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-04 01:00:05.131971 | orchestrator | Saturday 04 April 2026 00:59:41 +0000 (0:00:06.921) 0:02:40.053 ******** 2026-04-04 01:00:05.131974 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:05.131978 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:05.131982 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:05.131986 | orchestrator | 2026-04-04 01:00:05.131989 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-04 01:00:05.131993 | orchestrator | Saturday 04 April 2026 00:59:47 +0000 (0:00:05.759) 0:02:45.812 ******** 2026-04-04 01:00:05.131997 | orchestrator | changed: [testbed-manager] 2026-04-04 01:00:05.132001 | orchestrator | 2026-04-04 01:00:05.132004 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-04 01:00:05.132008 | orchestrator | Saturday 04 April 2026 00:59:51 +0000 (0:00:04.627) 0:02:50.440 ******** 2026-04-04 01:00:05.132012 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:00:05.132016 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:00:05.132019 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:00:05.132023 | orchestrator | 2026-04-04 01:00:05.132027 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:00:05.132031 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-04 01:00:05.132035 | 
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-04 01:00:05.132039 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-04 01:00:05.132043 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-04 01:00:05.132046 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 01:00:05.132050 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 01:00:05.132054 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-04 01:00:05.132058 | orchestrator | 2026-04-04 01:00:05.132062 | orchestrator | 2026-04-04 01:00:05.132065 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:00:05.132069 | orchestrator | Saturday 04 April 2026 01:00:01 +0000 (0:00:09.962) 0:03:00.402 ******** 2026-04-04 01:00:05.132073 | orchestrator | =============================================================================== 2026-04-04 01:00:05.132080 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.35s 2026-04-04 01:00:05.132084 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.75s 2026-04-04 01:00:05.132088 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.07s 2026-04-04 01:00:05.132094 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.68s 2026-04-04 01:00:05.132098 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.90s 2026-04-04 01:00:05.132103 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.80s 2026-04-04 
01:00:05.132107 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.01s 2026-04-04 01:00:05.132111 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.96s 2026-04-04 01:00:05.132114 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.92s 2026-04-04 01:00:05.132118 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.35s 2026-04-04 01:00:05.132122 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.76s 2026-04-04 01:00:05.132126 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.41s 2026-04-04 01:00:05.132129 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.01s 2026-04-04 01:00:05.132133 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.63s 2026-04-04 01:00:05.132138 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.19s 2026-04-04 01:00:05.132144 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.57s 2026-04-04 01:00:05.132150 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.48s 2026-04-04 01:00:05.132156 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.20s 2026-04-04 01:00:05.132166 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.97s 2026-04-04 01:00:05.132173 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.93s 2026-04-04 01:00:05.132179 | orchestrator | 2026-04-04 01:00:05 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state STARTED 2026-04-04 01:00:05.132185 | orchestrator | 2026-04-04 01:00:05 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 
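The TASKS RECAP above lists each task with its elapsed time in a fixed "name ---- N.NNs" layout (produced by Ansible's `profile_tasks`-style callback). As a minimal sketch, not part of the job itself, the following shows how such recap lines could be parsed to rank the slowest tasks; the regex and helper name are illustrative, assuming only the line shape visible in this log.

```python
import re

# Illustrative pattern for recap lines such as
# "prometheus : Restart prometheus-server container ------- 18.75s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Return (task, seconds) pairs sorted slowest-first."""
    out = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return sorted(out, key=lambda pair: -pair[1])

sample = [
    "prometheus : Copying over prometheus config file ----------------------- 14.07s",
    "prometheus : Restart prometheus-server container ----------------------- 18.75s",
]
print(parse_recap(sample))
```

Run against the recap in this log, such a filter would surface the alert-rules copy (23.35s) and the prometheus-server restart (18.75s) as the dominant costs of the roughly three-minute prometheus play.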
2026-04-04 01:00:05.132192 | orchestrator | 2026-04-04 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:00:08.173553 | orchestrator | 2026-04-04 01:00:08 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:00:08.175547 | orchestrator | 2026-04-04 01:00:08 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED 2026-04-04 01:00:08.177883 | orchestrator | 2026-04-04 01:00:08 | INFO  | Task 82621c48-9205-4054-8135-e505244a9b3c is in state SUCCESS 2026-04-04 01:00:08.179768 | orchestrator | 2026-04-04 01:00:08.179825 | orchestrator | 2026-04-04 01:00:08.179838 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:00:08.179848 | orchestrator | 2026-04-04 01:00:08.179857 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:00:08.179866 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.308) 0:00:00.308 ******** 2026-04-04 01:00:08.179875 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:08.179885 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:08.179893 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:08.179902 | orchestrator | 2026-04-04 01:00:08.179911 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:00:08.179920 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.266) 0:00:00.574 ******** 2026-04-04 01:00:08.179928 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-04-04 01:00:08.179937 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-04-04 01:00:08.179965 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-04-04 01:00:08.179974 | orchestrator | 2026-04-04 01:00:08.179983 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-04-04 01:00:08.179992 | orchestrator | 
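The "Group hosts based on enabled services" task above shows each host joining a dynamic group named after a service flag (here `enable_glance_True`), which the following "Apply role glance" play then targets. As a rough sketch of that grouping idea only — not the actual kolla-ansible `group_by` implementation, and with an invented helper name — the mapping can be modeled as:

```python
def service_groups(host_vars):
    """Map enable_<service>_<flag> group names to member hosts.

    host_vars: {hostname: {service_name: enabled_bool}}
    Mirrors the log's pattern where testbed-node-0..2 all join
    the enable_glance_True group.
    """
    groups = {}
    for host, flags in host_vars.items():
        for service, enabled in sorted(flags.items()):
            groups.setdefault(f"enable_{service}_{enabled}", []).append(host)
    return groups

hosts = {
    "testbed-node-0": {"glance": True},
    "testbed-node-1": {"glance": True},
    "testbed-node-2": {"glance": True},
}
print(service_groups(hosts))
```

This explains why only the three control-plane nodes appear in the glance play that follows, while testbed-node-3..5 (compute) are absent.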
2026-04-04 01:00:08.180001 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:00:08.180009 | orchestrator | Saturday 04 April 2026 00:57:08 +0000 (0:00:00.303) 0:00:00.878 ******** 2026-04-04 01:00:08.180018 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:08.180027 | orchestrator | 2026-04-04 01:00:08.180036 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-04-04 01:00:08.180181 | orchestrator | Saturday 04 April 2026 00:57:09 +0000 (0:00:00.556) 0:00:01.434 ******** 2026-04-04 01:00:08.180195 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-04-04 01:00:08.180204 | orchestrator | 2026-04-04 01:00:08.180213 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-04-04 01:00:08.180222 | orchestrator | Saturday 04 April 2026 00:57:18 +0000 (0:00:09.017) 0:00:10.451 ******** 2026-04-04 01:00:08.180230 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-04-04 01:00:08.180239 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-04-04 01:00:08.180248 | orchestrator | 2026-04-04 01:00:08.180257 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-04-04 01:00:08.180265 | orchestrator | Saturday 04 April 2026 00:57:26 +0000 (0:00:07.790) 0:00:18.242 ******** 2026-04-04 01:00:08.180274 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-04 01:00:08.180283 | orchestrator | 2026-04-04 01:00:08.180291 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-04-04 01:00:08.180300 | orchestrator | Saturday 04 April 2026 00:57:30 +0000 (0:00:03.883) 0:00:22.126 ******** 2026-04-04 
01:00:08.180320 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-04-04 01:00:08.180330 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:00:08.180338 | orchestrator | 2026-04-04 01:00:08.180347 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-04-04 01:00:08.180356 | orchestrator | Saturday 04 April 2026 00:57:35 +0000 (0:00:04.847) 0:00:26.974 ******** 2026-04-04 01:00:08.180480 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:00:08.180490 | orchestrator | 2026-04-04 01:00:08.180499 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-04-04 01:00:08.180511 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:03.748) 0:00:30.722 ******** 2026-04-04 01:00:08.180529 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-04-04 01:00:08.180544 | orchestrator | 2026-04-04 01:00:08.180560 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-04 01:00:08.180575 | orchestrator | Saturday 04 April 2026 00:57:43 +0000 (0:00:04.257) 0:00:34.980 ******** 2026-04-04 01:00:08.180612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.180646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.180673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.180702 | orchestrator | 2026-04-04 01:00:08.180719 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:00:08.180735 | orchestrator | Saturday 04 April 2026 00:57:46 +0000 (0:00:03.522) 0:00:38.503 ******** 2026-04-04 01:00:08.180752 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:08.180769 | orchestrator | 2026-04-04 01:00:08.180785 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-04 01:00:08.180813 | orchestrator | Saturday 04 April 2026 00:57:47 +0000 (0:00:00.557) 0:00:39.061 ******** 2026-04-04 01:00:08.180830 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:08.180848 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:08.180865 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:08.180881 | orchestrator | 2026-04-04 01:00:08.180898 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-04 01:00:08.180914 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:03.698) 0:00:42.759 ******** 2026-04-04 01:00:08.180924 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.180932 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.180941 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.180950 | orchestrator | 2026-04-04 01:00:08.180959 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-04 01:00:08.180967 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:02.299) 0:00:45.059 ******** 2026-04-04 01:00:08.180976 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.180985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.180994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:08.181002 | orchestrator | 2026-04-04 01:00:08.181011 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-04 01:00:08.181020 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:01.298) 0:00:46.357 ******** 2026-04-04 01:00:08.181029 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:00:08.181038 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:00:08.181046 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:00:08.181055 | orchestrator | 2026-04-04 01:00:08.181064 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-04 01:00:08.181073 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.659) 0:00:47.016 ******** 2026-04-04 01:00:08.181081 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.181090 | orchestrator | 2026-04-04 01:00:08.181099 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-04 01:00:08.181110 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.107) 0:00:47.123 ******** 2026-04-04 01:00:08.181120 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 01:00:08.181130 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.181140 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.181152 | orchestrator | 2026-04-04 01:00:08.181162 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:00:08.181178 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.241) 0:00:47.365 ******** 2026-04-04 01:00:08.181188 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:08.181198 | orchestrator | 2026-04-04 01:00:08.181209 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-04 01:00:08.181219 | orchestrator | Saturday 04 April 2026 00:57:56 +0000 (0:00:00.556) 0:00:47.921 ******** 2026-04-04 01:00:08.181238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.181274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.181303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}}) 2026-04-04 01:00:08.181329 | orchestrator | 2026-04-04 01:00:08.181345 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-04 01:00:08.181556 | orchestrator | Saturday 04 April 2026 00:57:59 +0000 (0:00:03.462) 0:00:51.384 ******** 2026-04-04 01:00:08.181585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2026-04-04 01:00:08.181596 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.181612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:00:08.181630 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.181646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:00:08.181656 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.181665 | orchestrator | 2026-04-04 01:00:08.181674 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-04 01:00:08.181683 | orchestrator | Saturday 04 April 2026 00:58:02 +0000 (0:00:02.577) 0:00:53.962 ******** 2026-04-04 01:00:08.181696 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:00:08.181720 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.181729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:00:08.181739 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.181755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-04 01:00:08.181764 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.181773 | orchestrator | 2026-04-04 01:00:08.181787 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-04 01:00:08.181796 | orchestrator | Saturday 04 April 2026 00:58:04 +0000 (0:00:02.825) 0:00:56.787 ******** 2026-04-04 01:00:08.181805 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.181813 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.181822 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.181830 | orchestrator | 2026-04-04 01:00:08.181839 | orchestrator | TASK [glance : Copying over config.json 
files for services] ******************** 2026-04-04 01:00:08.181848 | orchestrator | Saturday 04 April 2026 00:58:08 +0000 (0:00:03.640) 0:01:00.428 ******** 2026-04-04 01:00:08.181861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.181878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.181892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.181906 | orchestrator | 2026-04-04 01:00:08.181915 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-04 01:00:08.181924 | orchestrator | Saturday 04 April 2026 00:58:12 +0000 (0:00:04.411) 0:01:04.839 ******** 2026-04-04 01:00:08.181932 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:08.181941 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:08.181949 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:08.181958 | orchestrator | 2026-04-04 01:00:08.181966 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-04 01:00:08.181975 | 
orchestrator | Saturday 04 April 2026 00:58:19 +0000 (0:00:06.676) 0:01:11.516 ******** 2026-04-04 01:00:08.181984 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.181992 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182001 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182010 | orchestrator | 2026-04-04 01:00:08.182059 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-04 01:00:08.182068 | orchestrator | Saturday 04 April 2026 00:58:23 +0000 (0:00:03.794) 0:01:15.311 ******** 2026-04-04 01:00:08.182077 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182086 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182095 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182103 | orchestrator | 2026-04-04 01:00:08.182112 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-04 01:00:08.182121 | orchestrator | Saturday 04 April 2026 00:58:26 +0000 (0:00:03.400) 0:01:18.712 ******** 2026-04-04 01:00:08.182129 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182138 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182152 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182161 | orchestrator | 2026-04-04 01:00:08.182170 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-04 01:00:08.182179 | orchestrator | Saturday 04 April 2026 00:58:29 +0000 (0:00:02.555) 0:01:21.268 ******** 2026-04-04 01:00:08.182188 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182200 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182210 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182220 | orchestrator | 2026-04-04 01:00:08.182231 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-04 01:00:08.182242 | 
orchestrator | Saturday 04 April 2026 00:58:32 +0000 (0:00:03.088) 0:01:24.356 ******** 2026-04-04 01:00:08.182257 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182267 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182278 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182288 | orchestrator | 2026-04-04 01:00:08.182297 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-04 01:00:08.182305 | orchestrator | Saturday 04 April 2026 00:58:32 +0000 (0:00:00.452) 0:01:24.808 ******** 2026-04-04 01:00:08.182314 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:00:08.182323 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182332 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:00:08.182341 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-04 01:00:08.182350 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182378 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182395 | orchestrator | 2026-04-04 01:00:08.182410 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-04 01:00:08.182425 | orchestrator | Saturday 04 April 2026 00:58:36 +0000 (0:00:03.693) 0:01:28.502 ******** 2026-04-04 01:00:08.182441 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182456 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182471 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182486 | orchestrator | 2026-04-04 01:00:08.182495 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************ 2026-04-04 01:00:08.182504 | orchestrator | Saturday 04 April 2026 00:58:40 +0000 (0:00:03.853) 0:01:32.356 ******** 
2026-04-04 01:00:08.182513 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:08.182521 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:08.182530 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:08.182539 | orchestrator | 2026-04-04 01:00:08.182547 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-04 01:00:08.182556 | orchestrator | Saturday 04 April 2026 00:58:44 +0000 (0:00:03.758) 0:01:36.114 ******** 2026-04-04 01:00:08.182571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.182590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.182611 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-04 01:00:08.182621 | orchestrator | 2026-04-04 01:00:08.182629 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-04 01:00:08.182638 | orchestrator | Saturday 04 April 2026 00:58:49 +0000 (0:00:04.877) 0:01:40.992 ******** 2026-04-04 01:00:08.182647 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 01:00:08.182655 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:00:08.182664 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:00:08.182672 | orchestrator |
2026-04-04 01:00:08.182681 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-04-04 01:00:08.182690 | orchestrator | Saturday 04 April 2026 00:58:49 +0000 (0:00:00.795) 0:01:41.787 ********
2026-04-04 01:00:08.182699 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182724 | orchestrator |
2026-04-04 01:00:08.182742 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-04-04 01:00:08.182751 | orchestrator | Saturday 04 April 2026 00:58:52 +0000 (0:00:02.752) 0:01:44.540 ********
2026-04-04 01:00:08.182760 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182769 | orchestrator |
2026-04-04 01:00:08.182777 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-04 01:00:08.182786 | orchestrator | Saturday 04 April 2026 00:58:55 +0000 (0:00:02.432) 0:01:46.972 ********
2026-04-04 01:00:08.182795 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182804 | orchestrator |
2026-04-04 01:00:08.182813 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-04-04 01:00:08.182821 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:02.100) 0:01:49.073 ********
2026-04-04 01:00:08.182830 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182840 | orchestrator |
2026-04-04 01:00:08.182849 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-04 01:00:08.182858 | orchestrator | Saturday 04 April 2026 00:59:29 +0000 (0:00:32.009) 0:02:21.083 ********
2026-04-04 01:00:08.182866 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182875 | orchestrator |
2026-04-04 01:00:08.182889 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-04 01:00:08.182899 | orchestrator | Saturday 04 April 2026 00:59:31 +0000 (0:00:01.998) 0:02:23.081 ********
2026-04-04 01:00:08.182907 | orchestrator |
2026-04-04 01:00:08.182917 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-04 01:00:08.182926 | orchestrator | Saturday 04 April 2026 00:59:31 +0000 (0:00:00.065) 0:02:23.147 ********
2026-04-04 01:00:08.182935 | orchestrator |
2026-04-04 01:00:08.182944 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-04-04 01:00:08.182953 | orchestrator | Saturday 04 April 2026 00:59:31 +0000 (0:00:00.062) 0:02:23.209 ********
2026-04-04 01:00:08.182962 | orchestrator |
2026-04-04 01:00:08.182970 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-04-04 01:00:08.182980 | orchestrator | Saturday 04 April 2026 00:59:31 +0000 (0:00:00.067) 0:02:23.276 ********
2026-04-04 01:00:08.182988 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:00:08.182997 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:00:08.183006 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:00:08.183015 | orchestrator |
2026-04-04 01:00:08.183024 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:00:08.183034 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2026-04-04 01:00:08.183044 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:00:08.183053 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-04 01:00:08.183062 | orchestrator |
2026-04-04 01:00:08.183071 | orchestrator |
2026-04-04 01:00:08.183080 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:00:08.183089 | orchestrator | Saturday 04 April 2026 01:00:04 +0000 (0:00:33.551) 0:02:56.827 ********
2026-04-04 01:00:08.183098 | orchestrator | ===============================================================================
2026-04-04 01:00:08.183107 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.55s
2026-04-04 01:00:08.183115 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 32.01s
2026-04-04 01:00:08.183124 | orchestrator | service-ks-register : glance | Creating services ------------------------ 9.02s
2026-04-04 01:00:08.183133 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.79s
2026-04-04 01:00:08.183141 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.68s
2026-04-04 01:00:08.183161 | orchestrator | glance : Check glance containers ---------------------------------------- 4.88s
2026-04-04 01:00:08.183192 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.85s
2026-04-04 01:00:08.183211 | orchestrator | glance : Copying over config.json files for services -------------------- 4.41s
2026-04-04 01:00:08.183227 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.26s
2026-04-04 01:00:08.183244 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.88s
2026-04-04 01:00:08.183261 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.85s
2026-04-04 01:00:08.183278 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.79s
2026-04-04 01:00:08.183294 | orchestrator | glance : Generating 'hostid' file for glance_api ------------------------ 3.76s
2026-04-04 01:00:08.183306 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.75s
2026-04-04 01:00:08.183315 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.70s
2026-04-04 01:00:08.183324 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.69s
2026-04-04 01:00:08.183332 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.64s
2026-04-04 01:00:08.183341 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.52s
2026-04-04 01:00:08.183349 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.46s
2026-04-04 01:00:08.183386 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.40s
2026-04-04 01:00:08.183399 | orchestrator | 2026-04-04 01:00:08 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:08.183408 | orchestrator | 2026-04-04 01:00:08 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:08.183417 | orchestrator | 2026-04-04 01:00:08 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:11.231827 | orchestrator | 2026-04-04 01:00:11 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:11.233786 | orchestrator | 2026-04-04 01:00:11 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:11.236385 | orchestrator | 2026-04-04 01:00:11 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:11.237712 | orchestrator | 2026-04-04 01:00:11 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:11.237754 | orchestrator | 2026-04-04 01:00:11 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:14.286225 | orchestrator | 2026-04-04 01:00:14 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:14.288820 | orchestrator | 2026-04-04 01:00:14 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:14.291668 | orchestrator | 2026-04-04 01:00:14 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:14.294061 | orchestrator | 2026-04-04 01:00:14 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:14.294474 | orchestrator | 2026-04-04 01:00:14 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:17.345029 | orchestrator | 2026-04-04 01:00:17 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:17.347589 | orchestrator | 2026-04-04 01:00:17 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:17.349079 | orchestrator | 2026-04-04 01:00:17 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:17.350200 | orchestrator | 2026-04-04 01:00:17 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:17.350251 | orchestrator | 2026-04-04 01:00:17 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:20.400487 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:20.403433 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:20.405565 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:20.407296 | orchestrator | 2026-04-04 01:00:20 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:20.407398 | orchestrator | 2026-04-04 01:00:20 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:23.448630 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:23.450851 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:23.452662 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:23.454751 | orchestrator | 2026-04-04 01:00:23 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:23.454925 | orchestrator | 2026-04-04 01:00:23 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:26.497916 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:26.500584 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state STARTED
2026-04-04 01:00:26.502705 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:26.503873 | orchestrator | 2026-04-04 01:00:26 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:26.503938 | orchestrator | 2026-04-04 01:00:26 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:29.553986 | orchestrator | 2026-04-04 01:00:29 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:29.555661 | orchestrator | 2026-04-04 01:00:29 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED
2026-04-04 01:00:29.558680 | orchestrator | 2026-04-04 01:00:29 | INFO  | Task 99958d9d-4491-4444-8750-a7910ae02d4b is in state SUCCESS
2026-04-04 01:00:29.560255 | orchestrator |
2026-04-04 01:00:29.560352 | orchestrator |
2026-04-04 01:00:29.560366 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:00:29.560436 | orchestrator |
2026-04-04 01:00:29.560443 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:00:29.560450 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:00.372) 0:00:00.372 ********
2026-04-04 01:00:29.560456 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:00:29.560463 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:00:29.560470 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:00:29.560476 | orchestrator |
2026-04-04 01:00:29.560526 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:00:29.560531 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:00.371) 0:00:00.743 ********
2026-04-04 01:00:29.560535 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-04-04 01:00:29.560539 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-04-04 01:00:29.560543 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-04-04 01:00:29.560547 | orchestrator |
2026-04-04 01:00:29.560551 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-04-04 01:00:29.560555 | orchestrator |
2026-04-04 01:00:29.560559 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-04 01:00:29.560590 | orchestrator | Saturday 04 April 2026 00:57:38 +0000 (0:00:00.303) 0:00:01.047 ********
2026-04-04 01:00:29.560596 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:00:29.560600 | orchestrator |
2026-04-04 01:00:29.560604 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-04-04 01:00:29.560608 | orchestrator | Saturday 04 April 2026 00:57:39 +0000 (0:00:00.505) 0:00:01.553 ********
2026-04-04 01:00:29.560612 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-04-04 01:00:29.560616 | orchestrator |
2026-04-04 01:00:29.560620 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-04-04 01:00:29.560624 | orchestrator | Saturday 04 April 2026 00:57:43 +0000 (0:00:04.235) 0:00:05.788 ********
2026-04-04 01:00:29.560628 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-04-04 01:00:29.560632 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-04-04 01:00:29.560636 | orchestrator |
2026-04-04 01:00:29.560640 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-04-04 01:00:29.560643 | orchestrator | Saturday 04 April 2026 00:57:50 +0000 (0:00:07.537) 0:00:13.326 ********
2026-04-04 01:00:29.560648 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:00:29.560656 | orchestrator |
2026-04-04 01:00:29.560663 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-04-04 01:00:29.560667 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:03.568) 0:00:16.895 ********
2026-04-04 01:00:29.560671 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-04-04 01:00:29.560675 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:00:29.560678 | orchestrator |
2026-04-04 01:00:29.560682 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-04-04 01:00:29.560686 | orchestrator | Saturday 04 April 2026 00:57:58 +0000 (0:00:03.582) 0:00:21.132 ********
2026-04-04 01:00:29.560689 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:00:29.560693 | orchestrator |
2026-04-04 01:00:29.560697 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-04-04 01:00:29.560701 | orchestrator | Saturday 04 April 2026 00:58:02 +0000 (0:00:03.582) 0:00:24.714 ********
2026-04-04 01:00:29.560705 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service ->
admin) 2026-04-04 01:00:29.560708 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-04 01:00:29.560712 | orchestrator | 2026-04-04 01:00:29.560737 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-04 01:00:29.560741 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:09.060) 0:00:33.775 ******** 2026-04-04 01:00:29.560753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561155 | orchestrator | 2026-04-04 01:00:29.561159 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:00:29.561163 | orchestrator | Saturday 04 April 2026 00:58:14 +0000 (0:00:03.111) 0:00:36.886 ******** 2026-04-04 01:00:29.561170 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.561174 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.561178 | 
orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.561182 | orchestrator | 2026-04-04 01:00:29.561186 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:00:29.561189 | orchestrator | Saturday 04 April 2026 00:58:14 +0000 (0:00:00.429) 0:00:37.316 ******** 2026-04-04 01:00:29.561194 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:29.561197 | orchestrator | 2026-04-04 01:00:29.561201 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-04 01:00:29.561216 | orchestrator | Saturday 04 April 2026 00:58:15 +0000 (0:00:00.519) 0:00:37.836 ******** 2026-04-04 01:00:29.561222 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-04 01:00:29.561229 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-04 01:00:29.561236 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-04 01:00:29.561242 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-04 01:00:29.561249 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-04 01:00:29.561256 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-04 01:00:29.561263 | orchestrator | 2026-04-04 01:00:29.561270 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-04 01:00:29.561274 | orchestrator | Saturday 04 April 2026 00:58:18 +0000 (0:00:02.586) 0:00:40.423 ******** 2026-04-04 01:00:29.561278 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561284 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561290 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561298 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561315 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561332 | orchestrator | skipping: [testbed-node-2] => 
(item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-04 01:00:29.561340 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561346 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561360 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561382 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561387 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561391 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-04 01:00:29.561395 | orchestrator | 2026-04-04 01:00:29.561399 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-04 01:00:29.561403 | orchestrator | Saturday 04 April 2026 00:58:21 +0000 (0:00:03.866) 0:00:44.290 ******** 2026-04-04 01:00:29.561407 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:29.561411 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:29.561415 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-04 01:00:29.561418 | orchestrator | 2026-04-04 01:00:29.561422 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-04 01:00:29.561426 | orchestrator | Saturday 04 April 2026 00:58:24 +0000 (0:00:02.357) 0:00:46.648 ******** 2026-04-04 01:00:29.561434 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-04 01:00:29.561437 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-04 01:00:29.561441 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-04 01:00:29.561445 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 01:00:29.561449 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 01:00:29.561452 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-04 01:00:29.561456 | orchestrator | 2026-04-04 01:00:29.561460 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-04 01:00:29.561466 | orchestrator | Saturday 04 April 2026 00:58:27 +0000 (0:00:03.163) 0:00:49.811 ******** 2026-04-04 01:00:29.561469 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-04 01:00:29.561473 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-04 01:00:29.561477 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-04 01:00:29.561481 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-04 01:00:29.561485 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-04 01:00:29.561488 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-04 01:00:29.561492 | orchestrator | 2026-04-04 
01:00:29.561496 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-04 01:00:29.561499 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:01.129) 0:00:50.941 ******** 2026-04-04 01:00:29.561503 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.561507 | orchestrator | 2026-04-04 01:00:29.561511 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-04 01:00:29.561514 | orchestrator | Saturday 04 April 2026 00:58:28 +0000 (0:00:00.096) 0:00:51.038 ******** 2026-04-04 01:00:29.561518 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.561522 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.561525 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.561529 | orchestrator | 2026-04-04 01:00:29.561533 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:00:29.561537 | orchestrator | Saturday 04 April 2026 00:58:29 +0000 (0:00:00.357) 0:00:51.396 ******** 2026-04-04 01:00:29.561541 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:00:29.561556 | orchestrator | 2026-04-04 01:00:29.561561 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-04 01:00:29.561564 | orchestrator | Saturday 04 April 2026 00:58:29 +0000 (0:00:00.433) 0:00:51.829 ******** 2026-04-04 01:00:29.561569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561673 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561679 | orchestrator | 2026-04-04 01:00:29.561686 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-04 01:00:29.561692 | orchestrator | Saturday 04 April 2026 00:58:34 +0000 (0:00:04.859) 0:00:56.688 ******** 2026-04-04 01:00:29.561698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 
01:00:29.561733 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.561744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561778 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.561790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561831 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.561838 | orchestrator | 2026-04-04 01:00:29.561845 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-04 01:00:29.561849 | orchestrator | Saturday 04 April 2026 00:58:35 +0000 (0:00:01.017) 0:00:57.706 ******** 2026-04-04 01:00:29.561854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561885 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.561889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561907 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.561913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.561920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.561932 | orchestrator | skipping: [testbed-node-1] 
2026-04-04 01:00:29.561936 | orchestrator | 2026-04-04 01:00:29.561940 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-04 01:00:29.561944 | orchestrator | Saturday 04 April 2026 00:58:36 +0000 (0:00:01.160) 0:00:58.867 ******** 2026-04-04 01:00:29.561950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.561967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.561989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 
01:00:29.562010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562152 | orchestrator | 2026-04-04 01:00:29.562156 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-04 01:00:29.562160 | orchestrator | Saturday 04 April 2026 00:58:41 +0000 (0:00:04.824) 0:01:03.692 ******** 2026-04-04 01:00:29.562164 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-04 01:00:29.562168 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-04 01:00:29.562172 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-04 01:00:29.562175 | orchestrator | 2026-04-04 01:00:29.562179 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-04 01:00:29.562187 | orchestrator | Saturday 04 April 2026 00:58:43 +0000 (0:00:02.184) 0:01:05.876 ******** 2026-04-04 01:00:29.562196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562260 | orchestrator | 2026-04-04 01:00:29.562264 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-04 01:00:29.562268 | orchestrator | Saturday 04 April 2026 00:58:56 +0000 (0:00:13.133) 0:01:19.010 ******** 2026-04-04 01:00:29.562272 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562276 | orchestrator | 
changed: [testbed-node-1] 2026-04-04 01:00:29.562279 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:29.562283 | orchestrator | 2026-04-04 01:00:29.562287 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-04 01:00:29.562290 | orchestrator | Saturday 04 April 2026 00:58:57 +0000 (0:00:01.272) 0:01:20.282 ******** 2026-04-04 01:00:29.562294 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562298 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:29.562302 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:29.562305 | orchestrator | 2026-04-04 01:00:29.562309 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-04 01:00:29.562313 | orchestrator | Saturday 04 April 2026 00:58:59 +0000 (0:00:01.475) 0:01:21.758 ******** 2026-04-04 01:00:29.562317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.562340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562358 | orchestrator | skipping: [testbed-node-0] 
2026-04-04 01:00:29.562365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.562370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562384 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.562390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-04 01:00:29.562394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-04 01:00:29.562409 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.562413 | orchestrator | 2026-04-04 01:00:29.562417 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-04 01:00:29.562420 | orchestrator | Saturday 04 April 2026 00:59:00 +0000 (0:00:00.896) 0:01:22.655 ******** 2026-04-04 01:00:29.562424 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.562428 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.562432 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.562435 | orchestrator | 2026-04-04 01:00:29.562439 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-04 01:00:29.562443 | orchestrator | Saturday 04 April 2026 00:59:00 +0000 (0:00:00.540) 0:01:23.195 ******** 2026-04-04 01:00:29.562447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562455 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-04 01:00:29.562466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562498 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-04 01:00:29.562513 | orchestrator | 2026-04-04 01:00:29.562517 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-04 01:00:29.562520 | orchestrator | Saturday 04 April 2026 00:59:04 +0000 (0:00:03.950) 0:01:27.146 ******** 2026-04-04 01:00:29.562524 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.562528 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:00:29.562532 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:00:29.562535 | orchestrator | 2026-04-04 01:00:29.562539 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-04 01:00:29.562543 | orchestrator | Saturday 04 April 2026 00:59:05 +0000 (0:00:00.267) 0:01:27.414 ******** 2026-04-04 01:00:29.562547 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562550 | orchestrator | 2026-04-04 01:00:29.562554 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-04 01:00:29.562560 | orchestrator | Saturday 04 April 2026 00:59:07 +0000 (0:00:01.996) 0:01:29.410 ******** 2026-04-04 01:00:29.562564 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562567 | orchestrator | 2026-04-04 01:00:29.562571 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-04 01:00:29.562575 | orchestrator | Saturday 04 April 2026 00:59:09 +0000 (0:00:02.164) 0:01:31.575 ******** 2026-04-04 01:00:29.562579 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562583 | orchestrator | 2026-04-04 01:00:29.562586 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:00:29.562590 | orchestrator | Saturday 04 April 2026 00:59:30 +0000 
(0:00:21.345) 0:01:52.920 ******** 2026-04-04 01:00:29.562594 | orchestrator | 2026-04-04 01:00:29.562598 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:00:29.562601 | orchestrator | Saturday 04 April 2026 00:59:30 +0000 (0:00:00.063) 0:01:52.984 ******** 2026-04-04 01:00:29.562605 | orchestrator | 2026-04-04 01:00:29.562609 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-04 01:00:29.562612 | orchestrator | Saturday 04 April 2026 00:59:30 +0000 (0:00:00.063) 0:01:53.048 ******** 2026-04-04 01:00:29.562616 | orchestrator | 2026-04-04 01:00:29.562620 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-04 01:00:29.562624 | orchestrator | Saturday 04 April 2026 00:59:30 +0000 (0:00:00.062) 0:01:53.111 ******** 2026-04-04 01:00:29.562627 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562631 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:29.562635 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:29.562639 | orchestrator | 2026-04-04 01:00:29.562643 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-04 01:00:29.562649 | orchestrator | Saturday 04 April 2026 00:59:55 +0000 (0:00:25.067) 0:02:18.179 ******** 2026-04-04 01:00:29.562653 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562656 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:29.562660 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:29.562664 | orchestrator | 2026-04-04 01:00:29.562668 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-04 01:00:29.562672 | orchestrator | Saturday 04 April 2026 01:00:01 +0000 (0:00:05.336) 0:02:23.516 ******** 2026-04-04 01:00:29.562675 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562679 | orchestrator | 
changed: [testbed-node-2] 2026-04-04 01:00:29.562686 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:29.562689 | orchestrator | 2026-04-04 01:00:29.562693 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-04 01:00:29.562697 | orchestrator | Saturday 04 April 2026 01:00:20 +0000 (0:00:19.522) 0:02:43.038 ******** 2026-04-04 01:00:29.562700 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:00:29.562704 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:00:29.562708 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:00:29.562712 | orchestrator | 2026-04-04 01:00:29.562715 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-04 01:00:29.562719 | orchestrator | Saturday 04 April 2026 01:00:27 +0000 (0:00:06.853) 0:02:49.891 ******** 2026-04-04 01:00:29.562723 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:00:29.562727 | orchestrator | 2026-04-04 01:00:29.562731 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:00:29.562735 | orchestrator | testbed-node-0 : ok=31  changed=23  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-04 01:00:29.562739 | orchestrator | testbed-node-1 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:00:29.562743 | orchestrator | testbed-node-2 : ok=22  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:00:29.562747 | orchestrator | 2026-04-04 01:00:29.562750 | orchestrator | 2026-04-04 01:00:29.562754 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:00:29.562758 | orchestrator | Saturday 04 April 2026 01:00:27 +0000 (0:00:00.222) 0:02:50.113 ******** 2026-04-04 01:00:29.562761 | orchestrator | =============================================================================== 2026-04-04 
01:00:29.562765 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.07s
2026-04-04 01:00:29.562769 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.35s
2026-04-04 01:00:29.562772 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 19.52s
2026-04-04 01:00:29.562777 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.13s
2026-04-04 01:00:29.562782 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.06s
2026-04-04 01:00:29.562786 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.54s
2026-04-04 01:00:29.562790 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.85s
2026-04-04 01:00:29.562795 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.34s
2026-04-04 01:00:29.562799 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.86s
2026-04-04 01:00:29.562804 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.82s
2026-04-04 01:00:29.562808 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.24s
2026-04-04 01:00:29.562813 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.24s
2026-04-04 01:00:29.562817 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.95s
2026-04-04 01:00:29.562822 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.87s
2026-04-04 01:00:29.562827 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.58s
2026-04-04 01:00:29.562840 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.57s
2026-04-04 01:00:29.562845 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.16s
2026-04-04 01:00:29.562849 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.11s
2026-04-04 01:00:29.562854 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.59s
2026-04-04 01:00:29.562859 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.36s
2026-04-04 01:00:29.562865 | orchestrator | 2026-04-04 01:00:29 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:29.562869 | orchestrator | 2026-04-04 01:00:29 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:29.562873 | orchestrator | 2026-04-04 01:00:29 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:32.593940 | orchestrator | 2026-04-04 01:00:32 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:32.594431 | orchestrator | 2026-04-04 01:00:32 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED
2026-04-04 01:00:32.595056 | orchestrator | 2026-04-04 01:00:32 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:32.595949 | orchestrator | 2026-04-04 01:00:32 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:00:32.595992 | orchestrator | 2026-04-04 01:00:32 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:00:35.651216 | orchestrator | 2026-04-04 01:00:35 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:00:35.651468 | orchestrator | 2026-04-04 01:00:35 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED
2026-04-04 01:00:35.653182 | orchestrator | 2026-04-04 01:00:35 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state STARTED
2026-04-04 01:00:35.655838 | orchestrator | 2026-04-04 01:00:35 | INFO  | Task
1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:01:57.583512 | orchestrator | 2026-04-04 01:01:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:00.609942 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:02:00.611718 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED
2026-04-04 01:02:00.613804 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 412d9bf8-f33e-443e-8974-b614bc07d107 is in state SUCCESS
2026-04-04 01:02:00.615249 | orchestrator |
2026-04-04 01:02:00.615294 | orchestrator |
2026-04-04 01:02:00.615303 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:02:00.615311 | orchestrator |
2026-04-04 01:02:00.615318 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:02:00.615326 | orchestrator | Saturday 04 April 2026 01:00:08 +0000 (0:00:00.276) 0:00:00.276 ********
2026-04-04 01:02:00.615333 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:02:00.615341 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:02:00.615348 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:02:00.615355 | orchestrator |
2026-04-04 01:02:00.615362 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:02:00.615369 | orchestrator | Saturday 04 April 2026 01:00:08 +0000 (0:00:00.277) 0:00:00.554 ********
2026-04-04 01:02:00.615375 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-04-04 01:02:00.615383 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-04-04 01:02:00.615389 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-04-04 01:02:00.615396 | orchestrator |
2026-04-04 01:02:00.615403 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-04-04 01:02:00.615410 | orchestrator |
2026-04-04 01:02:00.615417 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-04 01:02:00.615424 | orchestrator | Saturday 04 April 2026 01:00:08 +0000 (0:00:00.279) 0:00:00.833 ********
2026-04-04 01:02:00.615431 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:02:00.615438 | orchestrator |
2026-04-04 01:02:00.615445 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-04-04 01:02:00.615515 | orchestrator | Saturday 04 April 2026 01:00:09 +0000 (0:00:00.526) 0:00:01.359 ********
2026-04-04 01:02:00.615722 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-04-04 01:02:00.615735 | orchestrator |
2026-04-04 01:02:00.615742 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-04-04 01:02:00.615750 | orchestrator | Saturday 04 April 2026 01:00:13 +0000 (0:00:03.707) 0:00:05.067 ********
2026-04-04 01:02:00.615757 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-04-04 01:02:00.615764 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-04-04 01:02:00.615771 | orchestrator |
2026-04-04 01:02:00.615778 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-04-04 01:02:00.615785 | orchestrator | Saturday 04 April 2026 01:00:19 +0000 (0:00:06.660) 0:00:11.727 ********
2026-04-04 01:02:00.615806 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:02:00.615814 | orchestrator |
2026-04-04 01:02:00.615821 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-04-04 01:02:00.615827 | orchestrator | Saturday 04 April 2026 01:00:23 +0000 (0:00:03.669) 0:00:15.397 ********
2026-04-04 01:02:00.615834 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-04-04 01:02:00.615841 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:02:00.615849 | orchestrator |
2026-04-04 01:02:00.615855 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-04-04 01:02:00.615862 | orchestrator | Saturday 04 April 2026 01:00:28 +0000 (0:00:04.789) 0:00:20.186 ********
2026-04-04 01:02:00.615869 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:02:00.615876 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-04-04 01:02:00.615883 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-04-04 01:02:00.615890 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-04-04 01:02:00.615897 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-04-04 01:02:00.615903 | orchestrator |
2026-04-04 01:02:00.615923 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-04-04 01:02:00.615939 | orchestrator | Saturday 04 April 2026 01:00:45 +0000 (0:00:16.836) 0:00:37.023 ********
2026-04-04 01:02:00.615946 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-04-04 01:02:00.615953 | orchestrator |
2026-04-04 01:02:00.615960 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-04-04 01:02:00.615967 | orchestrator | Saturday 04 April 2026 01:00:49 +0000 (0:00:04.347) 0:00:41.370 ********
2026-04-04 01:02:00.615976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-04 01:02:00.615995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-04 01:02:00.616003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-04 01:02:00.616015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.616072 | orchestrator |
2026-04-04 01:02:00.616077 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-04 01:02:00.616083 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:02.045) 0:00:43.416 ********
2026-04-04 01:02:00.616089 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-04-04 01:02:00.616094 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-04-04 01:02:00.616100 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-04 01:02:00.616106 | orchestrator |
2026-04-04 01:02:00.616111 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-04 01:02:00.616124 | orchestrator | Saturday 04 April 2026 01:00:53 +0000 (0:00:01.746) 0:00:45.162 ********
2026-04-04 01:02:00.616131 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:02:00.616138 | orchestrator |
2026-04-04 01:02:00.616143 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-04 01:02:00.616149 | orchestrator | Saturday 04 April 2026 01:00:53 +0000 (0:00:00.210) 0:00:45.373 ********
2026-04-04 01:02:00.616154 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:02:00.616161 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:02:00.616167 | orchestrator | skipping: [testbed-node-2]
2026-04-04
01:02:00.616174 | orchestrator | 2026-04-04 01:02:00.616198 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-04 01:02:00.616205 | orchestrator | Saturday 04 April 2026 01:00:53 +0000 (0:00:00.429) 0:00:45.803 ******** 2026-04-04 01:02:00.616213 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:02:00.616220 | orchestrator | 2026-04-04 01:02:00.616227 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-04 01:02:00.616234 | orchestrator | Saturday 04 April 2026 01:00:54 +0000 (0:00:00.699) 0:00:46.502 ******** 2026-04-04 01:02:00.616246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616332 | orchestrator | 2026-04-04 01:02:00.616338 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-04 01:02:00.616345 | orchestrator | Saturday 04 April 2026 01:00:57 +0000 (0:00:03.058) 0:00:49.560 ******** 2026-04-04 01:02:00.616351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616376 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:00.616387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616414 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:00.616422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616447 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:00.616454 | orchestrator | 2026-04-04 01:02:00.616461 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-04 01:02:00.616472 | orchestrator | Saturday 04 April 2026 01:00:58 +0000 (0:00:00.984) 0:00:50.545 ******** 2026-04-04 01:02:00.616483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616504 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:00.616514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616544 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:00.616555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616577 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:00.616584 | orchestrator | 2026-04-04 01:02:00.616591 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-04 01:02:00.616597 | orchestrator | Saturday 04 April 2026 01:01:00 +0000 (0:00:01.483) 0:00:52.028 ******** 2026-04-04 01:02:00.616607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2026-04-04 01:02:00.616645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2026-04-04 01:02:00.616678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616693 | orchestrator | 2026-04-04 01:02:00.616700 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-04 01:02:00.616707 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:03.539) 0:00:55.567 ******** 2026-04-04 01:02:00.616714 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:02:00.616721 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:02:00.616728 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:02:00.616735 | orchestrator | 2026-04-04 01:02:00.616741 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-04 01:02:00.616748 | orchestrator | Saturday 04 April 2026 01:01:05 +0000 (0:00:02.130) 
0:00:57.698 ******** 2026-04-04 01:02:00.616755 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:02:00.616762 | orchestrator | 2026-04-04 01:02:00.616769 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-04 01:02:00.616776 | orchestrator | Saturday 04 April 2026 01:01:06 +0000 (0:00:01.107) 0:00:58.805 ******** 2026-04-04 01:02:00.616783 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:00.616790 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:00.616798 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:00.616805 | orchestrator | 2026-04-04 01:02:00.616812 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-04 01:02:00.616820 | orchestrator | Saturday 04 April 2026 01:01:07 +0000 (0:00:00.939) 0:00:59.745 ******** 2026-04-04 01:02:00.616827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.616860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.616904 | orchestrator | 2026-04-04 01:02:00.616910 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-04 01:02:00.616916 | orchestrator | Saturday 04 April 2026 01:01:15 +0000 (0:00:08.070) 0:01:07.815 ******** 2026-04-04 01:02:00.616926 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616951 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:02:00.616960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616977 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.616982 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:02:00.616988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-04 01:02:00.616994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.617003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:02:00.617010 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:02:00.617016 | orchestrator | 2026-04-04 01:02:00.617022 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-04 01:02:00.617027 | orchestrator | Saturday 04 April 2026 01:01:16 +0000 (0:00:00.943) 0:01:08.758 ******** 2026-04-04 01:02:00.617036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.617046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.617053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-04 01:02:00.617060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.617070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.617079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.617084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.617094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:02:00.617101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:02:00.617107 | orchestrator |
2026-04-04 01:02:00.617113 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-04 01:02:00.617119 | orchestrator | Saturday 04 April 2026 01:01:19 +0000 (0:00:02.924) 0:01:11.683 ********
2026-04-04 01:02:00.617125 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:02:00.617136 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:02:00.617142 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:02:00.617148 | orchestrator |
2026-04-04 01:02:00.617154 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-04 01:02:00.617160 | orchestrator | Saturday 04 April 2026 01:01:20 +0000 (0:00:00.233) 0:01:11.916 ********
2026-04-04 01:02:00.617165 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617171 | orchestrator |
2026-04-04 01:02:00.617190 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-04 01:02:00.617197 | orchestrator | Saturday 04 April 2026 01:01:22 +0000 (0:00:02.097) 0:01:14.014 ********
2026-04-04 01:02:00.617203 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617209 | orchestrator |
2026-04-04 01:02:00.617215 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-04 01:02:00.617221 | orchestrator | Saturday 04 April 2026 01:01:24 +0000 (0:00:02.697) 0:01:16.712 ********
2026-04-04 01:02:00.617227 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617233 | orchestrator |
2026-04-04 01:02:00.617238 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:02:00.617245 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:11.664) 0:01:28.376 ********
2026-04-04 01:02:00.617251 | orchestrator |
2026-04-04 01:02:00.617257 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:02:00.617263 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:00.304) 0:01:28.681 ********
2026-04-04 01:02:00.617268 | orchestrator |
2026-04-04 01:02:00.617274 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-04 01:02:00.617280 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:00.102) 0:01:28.784 ********
2026-04-04 01:02:00.617286 | orchestrator |
2026-04-04 01:02:00.617293 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-04 01:02:00.617298 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:00.106) 0:01:28.890 ********
2026-04-04 01:02:00.617305 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:02:00.617311 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:02:00.617317 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617323 | orchestrator |
2026-04-04 01:02:00.617329 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-04 01:02:00.617339 | orchestrator | Saturday 04 April 2026 01:01:46 +0000 (0:00:09.540) 0:01:38.430 ********
2026-04-04 01:02:00.617346 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617352 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:02:00.617358 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:02:00.617364 | orchestrator |
2026-04-04 01:02:00.617370 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-04 01:02:00.617376 | orchestrator | Saturday 04 April 2026 01:01:53 +0000 (0:00:07.034) 0:01:45.465 ********
2026-04-04 01:02:00.617383 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:02:00.617389 | orchestrator | changed: [testbed-node-1]
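The container definitions logged above attach Docker healthchecks built on kolla's `healthcheck_curl` and `healthcheck_port` helpers (e.g. `healthcheck_curl http://192.168.16.10:9311`, run every 30 s with 3 retries). As a rough illustration only — the real helper scripts live inside the kolla images and their exact options are not shown in this log — a single HTTP probe of that kind can be sketched as:

```python
import urllib.request


def http_healthcheck(url: str, timeout: float = 30.0) -> bool:
    """One HTTP liveness probe: True if the endpoint answers with a
    non-error status, False on connection failure or HTTP error.

    Hypothetical stand-in for kolla's healthcheck_curl, not its actual
    implementation.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers URLError/HTTPError and socket errors
        return False
```

Docker itself reruns the configured probe on the given interval and only marks the container unhealthy after the retry count is exhausted; the sketch covers a single attempt.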
2026-04-04 01:02:00.617396 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:02:00.617402 | orchestrator |
2026-04-04 01:02:00.617409 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:02:00.617417 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-04 01:02:00.617424 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 01:02:00.617431 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 01:02:00.617438 | orchestrator |
2026-04-04 01:02:00.617444 | orchestrator |
2026-04-04 01:02:00.617451 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:02:00.617458 | orchestrator | Saturday 04 April 2026 01:01:59 +0000 (0:00:06.083) 0:01:51.548 ********
2026-04-04 01:02:00.617471 | orchestrator | ===============================================================================
2026-04-04 01:02:00.617478 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.84s
2026-04-04 01:02:00.617491 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.67s
2026-04-04 01:02:00.617497 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.54s
2026-04-04 01:02:00.617504 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.07s
2026-04-04 01:02:00.617511 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.04s
2026-04-04 01:02:00.617518 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.66s
2026-04-04 01:02:00.617525 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 6.08s
2026-04-04 01:02:00.617531 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.79s
2026-04-04 01:02:00.617538 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.35s
2026-04-04 01:02:00.617545 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.71s
2026-04-04 01:02:00.617552 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.67s
2026-04-04 01:02:00.617558 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.54s
2026-04-04 01:02:00.617565 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.06s
2026-04-04 01:02:00.617571 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.92s
2026-04-04 01:02:00.617577 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.70s
2026-04-04 01:02:00.617584 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.13s
2026-04-04 01:02:00.617591 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.10s
2026-04-04 01:02:00.617597 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.05s
2026-04-04 01:02:00.617604 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.75s
2026-04-04 01:02:00.617611 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.48s
2026-04-04 01:02:00.617618 | orchestrator | 2026-04-04 01:02:00 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:02:00.617625 | orchestrator | 2026-04-04 01:02:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:03.649563 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:02:03.649882 |
orchestrator | 2026-04-04 01:02:03 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED
2026-04-04 01:02:03.650469 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED
2026-04-04 01:02:03.650924 | orchestrator | 2026-04-04 01:02:03 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:02:03.650936 | orchestrator | 2026-04-04 01:02:03 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:02:58.316761 | orchestrator | 2026-04-04 01:02:58 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED
2026-04-04 01:02:58.318797 | orchestrator | 2026-04-04 01:02:58 | INFO  | Task
ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED 2026-04-04 01:02:58.319538 | orchestrator | 2026-04-04 01:02:58 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:02:58.320228 | orchestrator | 2026-04-04 01:02:58 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:02:58.320252 | orchestrator | 2026-04-04 01:02:58 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:01.353768 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:01.354255 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED 2026-04-04 01:03:01.355053 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:01.355835 | orchestrator | 2026-04-04 01:03:01 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:01.355886 | orchestrator | 2026-04-04 01:03:01 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:04.380801 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:04.381755 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED 2026-04-04 01:03:04.383930 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:04.384695 | orchestrator | 2026-04-04 01:03:04 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:04.384721 | orchestrator | 2026-04-04 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:07.409560 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:07.409615 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task 
ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED 2026-04-04 01:03:07.409854 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:07.410592 | orchestrator | 2026-04-04 01:03:07 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:07.410626 | orchestrator | 2026-04-04 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:10.445982 | orchestrator | 2026-04-04 01:03:10 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:10.447792 | orchestrator | 2026-04-04 01:03:10 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state STARTED 2026-04-04 01:03:10.450150 | orchestrator | 2026-04-04 01:03:10 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:10.451031 | orchestrator | 2026-04-04 01:03:10 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:10.451067 | orchestrator | 2026-04-04 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:13.479154 | orchestrator | 2026-04-04 01:03:13 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:03:13.479723 | orchestrator | 2026-04-04 01:03:13 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:13.482323 | orchestrator | 2026-04-04 01:03:13 | INFO  | Task ba32320c-ef60-4114-97c4-e851f97efd30 is in state SUCCESS 2026-04-04 01:03:13.483398 | orchestrator | 2026-04-04 01:03:13.483436 | orchestrator | 2026-04-04 01:03:13.483444 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:03:13.483452 | orchestrator | 2026-04-04 01:03:13.483459 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:03:13.483465 | orchestrator | Saturday 04 April 2026 01:00:30 +0000 (0:00:00.286) 0:00:00.286 
******** 2026-04-04 01:03:13.483472 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:03:13.483479 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:03:13.483485 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:03:13.483492 | orchestrator | 2026-04-04 01:03:13.483498 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:03:13.483504 | orchestrator | Saturday 04 April 2026 01:00:31 +0000 (0:00:00.285) 0:00:00.571 ******** 2026-04-04 01:03:13.483527 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-04-04 01:03:13.483534 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-04-04 01:03:13.483540 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-04-04 01:03:13.483547 | orchestrator | 2026-04-04 01:03:13.483553 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-04-04 01:03:13.483559 | orchestrator | 2026-04-04 01:03:13.483566 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:03:13.483572 | orchestrator | Saturday 04 April 2026 01:00:31 +0000 (0:00:00.280) 0:00:00.852 ******** 2026-04-04 01:03:13.483578 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:13.483585 | orchestrator | 2026-04-04 01:03:13.483591 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-04-04 01:03:13.483598 | orchestrator | Saturday 04 April 2026 01:00:31 +0000 (0:00:00.584) 0:00:01.436 ******** 2026-04-04 01:03:13.483650 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-04-04 01:03:13.483658 | orchestrator | 2026-04-04 01:03:13.483665 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-04-04 01:03:13.483672 | orchestrator | Saturday 04 April 
2026 01:00:36 +0000 (0:00:04.280) 0:00:05.716 ******** 2026-04-04 01:03:13.483678 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-04-04 01:03:13.483685 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-04-04 01:03:13.483692 | orchestrator | 2026-04-04 01:03:13.483698 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-04-04 01:03:13.483742 | orchestrator | Saturday 04 April 2026 01:00:42 +0000 (0:00:06.643) 0:00:12.360 ******** 2026-04-04 01:03:13.483750 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:03:13.483757 | orchestrator | 2026-04-04 01:03:13.483763 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-04-04 01:03:13.483770 | orchestrator | Saturday 04 April 2026 01:00:46 +0000 (0:00:03.729) 0:00:16.090 ******** 2026-04-04 01:03:13.483776 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-04-04 01:03:13.483782 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:03:13.483788 | orchestrator | 2026-04-04 01:03:13.483794 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-04-04 01:03:13.483800 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:04.437) 0:00:20.527 ******** 2026-04-04 01:03:13.483806 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:03:13.483812 | orchestrator | 2026-04-04 01:03:13.483819 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-04 01:03:13.483825 | orchestrator | Saturday 04 April 2026 01:00:54 +0000 (0:00:03.786) 0:00:24.314 ******** 2026-04-04 01:03:13.483831 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-04 01:03:13.483837 | orchestrator 
| 2026-04-04 01:03:13.483843 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-04 01:03:13.483850 | orchestrator | Saturday 04 April 2026 01:00:58 +0000 (0:00:03.784) 0:00:28.099 ******** 2026-04-04 01:03:13.484208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2026-04-04 01:03:13.484579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484602 | orchestrator | 2026-04-04 01:03:13.484608 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-04 01:03:13.484614 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:04.705) 0:00:32.805 ******** 2026-04-04 01:03:13.484620 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:13.484626 | orchestrator | 2026-04-04 01:03:13.484631 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-04 01:03:13.484637 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:00.094) 0:00:32.899 ******** 2026-04-04 01:03:13.484642 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:13.484648 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:13.484653 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:13.484658 | orchestrator | 2026-04-04 01:03:13.484664 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-04 01:03:13.484674 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:00.242) 0:00:33.142 ******** 2026-04-04 01:03:13.484679 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:03:13.484708 | orchestrator | 2026-04-04 01:03:13.484715 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-04 01:03:13.484722 | 
orchestrator | Saturday 04 April 2026 01:01:04 +0000 (0:00:01.146) 0:00:34.288 ******** 2026-04-04 01:03:13.484732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.484766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484874 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.484949 | orchestrator | 2026-04-04 01:03:13.484955 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-04 01:03:13.484961 | orchestrator | Saturday 04 April 2026 01:01:12 +0000 (0:00:07.666) 0:00:41.955 ******** 2026-04-04 01:03:13.484968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.484977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.484999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485029 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:13.485036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.485068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485120 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:13.485127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485133 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.485163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485194 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:13.485200 | orchestrator | 2026-04-04 01:03:13.485206 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-04 01:03:13.485212 | orchestrator | Saturday 04 April 2026 01:01:13 +0000 (0:00:01.354) 0:00:43.309 ******** 2026-04-04 01:03:13.485219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.485249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485280 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:13.485286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.485316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485323 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485347 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:13.485353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-04 01:03:13.485369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2026-04-04 01:03:13.485413 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:13.485419 | orchestrator | 2026-04-04 01:03:13.485425 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-04 01:03:13.485431 | orchestrator | Saturday 04 April 2026 01:01:15 +0000 (0:00:01.176) 0:00:44.486 ******** 2026-04-04 01:03:13.485438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485633 | orchestrator | 2026-04-04 01:03:13.485639 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-04 01:03:13.485646 | orchestrator | Saturday 04 April 2026 01:01:21 +0000 (0:00:06.344) 0:00:50.830 ******** 2026-04-04 01:03:13.485653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-04 01:03:13.485686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485812 | orchestrator | 2026-04-04 01:03:13.485819 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-04 01:03:13.485826 | orchestrator | Saturday 04 April 2026 01:01:39 +0000 (0:00:18.440) 0:01:09.271 ******** 2026-04-04 01:03:13.485832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:03:13.485839 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:03:13.485846 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-04 01:03:13.485852 | 
orchestrator | 2026-04-04 01:03:13.485858 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-04 01:03:13.485865 | orchestrator | Saturday 04 April 2026 01:01:45 +0000 (0:00:05.501) 0:01:14.773 ******** 2026-04-04 01:03:13.485871 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:03:13.485878 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:03:13.485884 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-04 01:03:13.485890 | orchestrator | 2026-04-04 01:03:13.485897 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-04 01:03:13.485904 | orchestrator | Saturday 04 April 2026 01:01:49 +0000 (0:00:04.235) 0:01:19.008 ******** 2026-04-04 01:03:13.485911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-04 01:03:13.485942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.485979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.485995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-04 01:03:13.486001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-04 01:03:13.486007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486151 | orchestrator |
2026-04-04 01:03:13.486157 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-04 01:03:13.486163 | orchestrator | Saturday 04 April 2026 01:01:53 +0000 (0:00:03.784) 0:01:22.792 ********
2026-04-04 01:03:13.486170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486313 | orchestrator |
2026-04-04 01:03:13.486319 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-04 01:03:13.486325 | orchestrator | Saturday 04 April 2026 01:01:56 +0000 (0:00:03.019) 0:01:25.812 ********
2026-04-04 01:03:13.486331 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:03:13.486338 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:03:13.486344 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:03:13.486350 | orchestrator |
2026-04-04 01:03:13.486356 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-04-04 01:03:13.486362 | orchestrator | Saturday 04 April 2026 01:01:56 +0000 (0:00:00.357) 0:01:26.169 ********
2026-04-04 01:03:13.486369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486418 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:03:13.486425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486475 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:03:13.486481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486529 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:03:13.486535 | orchestrator |
2026-04-04 01:03:13.486542 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-04-04 01:03:13.486548 | orchestrator | Saturday 04 April 2026 01:01:58 +0000 (0:00:01.439) 0:01:27.609 ********
2026-04-04 01:03:13.486555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-04 01:03:13.486581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-04 01:03:13.486609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-04 01:03:13.486672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer
5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.486678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.486685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.486697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:03:13.486704 | orchestrator | 2026-04-04 01:03:13.486710 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2026-04-04 01:03:13.486716 | orchestrator | Saturday 04 April 2026 01:02:02 +0000 (0:00:04.629) 0:01:32.239 ******** 2026-04-04 01:03:13.486727 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:03:13.486733 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:03:13.486739 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:03:13.486745 | orchestrator | 2026-04-04 01:03:13.486751 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-04-04 01:03:13.486757 | orchestrator | Saturday 04 April 2026 01:02:03 +0000 (0:00:00.321) 0:01:32.560 ******** 2026-04-04 01:03:13.486763 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-04-04 01:03:13.486769 | orchestrator | 2026-04-04 01:03:13.486775 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-04-04 01:03:13.486781 | orchestrator | Saturday 04 April 2026 01:02:05 +0000 (0:00:02.301) 0:01:34.862 ******** 2026-04-04 01:03:13.486786 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:03:13.486791 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-04-04 01:03:13.486796 | orchestrator | 2026-04-04 01:03:13.486802 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-04 01:03:13.486809 | orchestrator | Saturday 04 April 2026 01:02:07 +0000 (0:00:02.236) 0:01:37.099 ******** 2026-04-04 01:03:13.486816 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.486822 | orchestrator | 2026-04-04 01:03:13.486828 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-04 01:03:13.486834 | orchestrator | Saturday 04 April 2026 01:02:22 +0000 (0:00:14.938) 0:01:52.037 ******** 2026-04-04 01:03:13.486840 | orchestrator | 2026-04-04 01:03:13.486846 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2026-04-04 01:03:13.486852 | orchestrator | Saturday 04 April 2026 01:02:22 +0000 (0:00:00.059) 0:01:52.097 ******** 2026-04-04 01:03:13.486858 | orchestrator | 2026-04-04 01:03:13.486864 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-04 01:03:13.486870 | orchestrator | Saturday 04 April 2026 01:02:22 +0000 (0:00:00.060) 0:01:52.158 ******** 2026-04-04 01:03:13.486877 | orchestrator | 2026-04-04 01:03:13.486883 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-04 01:03:13.486889 | orchestrator | Saturday 04 April 2026 01:02:22 +0000 (0:00:00.066) 0:01:52.224 ******** 2026-04-04 01:03:13.486895 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.486901 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.486908 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.486914 | orchestrator | 2026-04-04 01:03:13.486920 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-04 01:03:13.486927 | orchestrator | Saturday 04 April 2026 01:02:29 +0000 (0:00:06.911) 0:01:59.136 ******** 2026-04-04 01:03:13.486933 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.486939 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.486946 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.486951 | orchestrator | 2026-04-04 01:03:13.486957 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-04 01:03:13.486963 | orchestrator | Saturday 04 April 2026 01:02:35 +0000 (0:00:05.643) 0:02:04.780 ******** 2026-04-04 01:03:13.486970 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.486976 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.486982 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.486989 | orchestrator | 
2026-04-04 01:03:13.486995 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-04 01:03:13.487001 | orchestrator | Saturday 04 April 2026 01:02:41 +0000 (0:00:05.942) 0:02:10.722 ******** 2026-04-04 01:03:13.487007 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.487013 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.487019 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.487025 | orchestrator | 2026-04-04 01:03:13.487032 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-04 01:03:13.487038 | orchestrator | Saturday 04 April 2026 01:02:51 +0000 (0:00:09.824) 0:02:20.547 ******** 2026-04-04 01:03:13.487049 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.487055 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.487061 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.487067 | orchestrator | 2026-04-04 01:03:13.487108 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-04 01:03:13.487116 | orchestrator | Saturday 04 April 2026 01:02:57 +0000 (0:00:06.566) 0:02:27.114 ******** 2026-04-04 01:03:13.487123 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.487129 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:03:13.487135 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:03:13.487142 | orchestrator | 2026-04-04 01:03:13.487148 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-04 01:03:13.487154 | orchestrator | Saturday 04 April 2026 01:03:03 +0000 (0:00:05.863) 0:02:32.978 ******** 2026-04-04 01:03:13.487161 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:03:13.487167 | orchestrator | 2026-04-04 01:03:13.487174 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:03:13.487180 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:03:13.487192 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:03:13.487198 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:03:13.487205 | orchestrator | 2026-04-04 01:03:13.487211 | orchestrator | 2026-04-04 01:03:13.487223 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:03:13.487229 | orchestrator | Saturday 04 April 2026 01:03:10 +0000 (0:00:07.112) 0:02:40.090 ******** 2026-04-04 01:03:13.487235 | orchestrator | =============================================================================== 2026-04-04 01:03:13.487241 | orchestrator | designate : Copying over designate.conf -------------------------------- 18.44s 2026-04-04 01:03:13.487248 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.94s 2026-04-04 01:03:13.487254 | orchestrator | designate : Restart designate-producer container ------------------------ 9.82s 2026-04-04 01:03:13.487261 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.67s 2026-04-04 01:03:13.487267 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.11s 2026-04-04 01:03:13.487273 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 6.91s 2026-04-04 01:03:13.487280 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.64s 2026-04-04 01:03:13.487286 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.57s 2026-04-04 01:03:13.487292 | orchestrator | designate : Copying over config.json files for services ----------------- 6.34s 2026-04-04 01:03:13.487298 | orchestrator | designate : 
Restart designate-central container ------------------------- 5.94s 2026-04-04 01:03:13.487304 | orchestrator | designate : Restart designate-worker container -------------------------- 5.86s 2026-04-04 01:03:13.487310 | orchestrator | designate : Restart designate-api container ----------------------------- 5.64s 2026-04-04 01:03:13.487316 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.50s 2026-04-04 01:03:13.487323 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.71s 2026-04-04 01:03:13.487329 | orchestrator | designate : Check designate containers ---------------------------------- 4.63s 2026-04-04 01:03:13.487335 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.44s 2026-04-04 01:03:13.487341 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.28s 2026-04-04 01:03:13.487348 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.24s 2026-04-04 01:03:13.487354 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.79s 2026-04-04 01:03:13.487365 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.78s 2026-04-04 01:03:13.487370 | orchestrator | 2026-04-04 01:03:13 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:13.487377 | orchestrator | 2026-04-04 01:03:13 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:13.487383 | orchestrator | 2026-04-04 01:03:13 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:16.512472 | orchestrator | 2026-04-04 01:03:16 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:03:16.513246 | orchestrator | 2026-04-04 01:03:16 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:16.513572 | 
orchestrator | 2026-04-04 01:03:16 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state STARTED 2026-04-04 01:03:16.514197 | orchestrator | 2026-04-04 01:03:16 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:16.514206 | orchestrator | 2026-04-04 01:03:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:46.869339 | orchestrator | 2026-04-04 01:03:46 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:03:46.869413 | orchestrator | 2026-04-04 01:03:46 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:46.870842 | orchestrator | 2026-04-04 01:03:46 | INFO  | Task adbbf6ba-76e5-44b5-8636-3476f076ee84 is in state SUCCESS 2026-04-04 01:03:46.871638 | orchestrator | 2026-04-04 01:03:46 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:46.871682 | orchestrator | 2026-04-04 01:03:46 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:49.906650 | orchestrator | 2026-04-04 01:03:49 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:03:49.908569 | orchestrator | 2026-04-04 01:03:49 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:49.909927 | orchestrator | 2026-04-04 01:03:49 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:03:49.910349 | orchestrator | 2026-04-04 01:03:49 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:49.910381 | orchestrator | 2026-04-04 01:03:49 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:03:52.980804 | orchestrator | 2026-04-04 01:03:52 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:03:52.980866 | orchestrator | 2026-04-04 01:03:52 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state STARTED 2026-04-04 01:03:52.980874 | orchestrator | 2026-04-04 01:03:52 | INFO  | Task 
43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:03:52.980880 | orchestrator | 2026-04-04 01:03:52 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:03:52.980886 | orchestrator | 2026-04-04 01:03:52 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:11.220962 | orchestrator | 2026-04-04 01:04:11 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:04:11.222159 | orchestrator | 2026-04-04 01:04:11 | INFO  | Task cbfcee13-3858-4dd7-bd5d-0a00f012f8a2 is in state SUCCESS 2026-04-04 01:04:11.223397 | orchestrator | 2026-04-04 01:04:11.223429 | orchestrator | 2026-04-04 
01:04:11.223434 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-04-04 01:04:11.223438 | orchestrator | 2026-04-04 01:04:11.223441 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-04-04 01:04:11.223445 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:00.175) 0:00:00.175 ******** 2026-04-04 01:04:11.223448 | orchestrator | changed: [localhost] 2026-04-04 01:04:11.223452 | orchestrator | 2026-04-04 01:04:11.223455 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-04-04 01:04:11.223458 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:00.670) 0:00:00.845 ******** 2026-04-04 01:04:11.223461 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-04-04 01:04:11.223465 | orchestrator | changed: [localhost] 2026-04-04 01:04:11.223468 | orchestrator | 2026-04-04 01:04:11.223471 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-04-04 01:04:11.223474 | orchestrator | Saturday 04 April 2026 01:03:17 +0000 (0:01:12.473) 0:01:13.319 ******** 2026-04-04 01:04:11.223477 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
2026-04-04 01:04:11.223481 | orchestrator | changed: [localhost] 2026-04-04 01:04:11.223484 | orchestrator | 2026-04-04 01:04:11.223487 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:04:11.223490 | orchestrator | 2026-04-04 01:04:11.223493 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:04:11.223496 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:27.517) 0:01:40.836 ******** 2026-04-04 01:04:11.223499 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:04:11.223502 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:11.223505 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:11.223508 | orchestrator | 2026-04-04 01:04:11.223511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:04:11.223514 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.370) 0:01:41.206 ******** 2026-04-04 01:04:11.223518 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-04-04 01:04:11.223521 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-04-04 01:04:11.223524 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-04-04 01:04:11.223527 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-04-04 01:04:11.223541 | orchestrator | 2026-04-04 01:04:11.223544 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-04-04 01:04:11.223547 | orchestrator | skipping: no hosts matched 2026-04-04 01:04:11.223551 | orchestrator | 2026-04-04 01:04:11.223554 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:04:11.223557 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:04:11.223561 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:04:11.223564 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:04:11.223568 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:04:11.223571 | orchestrator | 2026-04-04 01:04:11.223574 | orchestrator | 2026-04-04 01:04:11.223577 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:04:11.223580 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.487) 0:01:41.694 ******** 2026-04-04 01:04:11.223639 | orchestrator | =============================================================================== 2026-04-04 01:04:11.223644 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 72.47s 2026-04-04 01:04:11.223647 | orchestrator | Download ironic-agent kernel ------------------------------------------- 27.52s 2026-04-04 01:04:11.223651 | orchestrator | Ensure the destination directory exists --------------------------------- 0.67s 2026-04-04 01:04:11.223654 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2026-04-04 01:04:11.223657 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2026-04-04 01:04:11.223660 | orchestrator | 2026-04-04 01:04:11.223664 | orchestrator | 2026-04-04 01:04:11.223667 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:04:11.223670 | orchestrator | 2026-04-04 01:04:11.223673 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:04:11.223677 | orchestrator | Saturday 04 April 2026 01:00:05 +0000 (0:00:00.310) 0:00:00.310 ******** 2026-04-04 01:04:11.223680 | orchestrator | ok: [testbed-node-0] 2026-04-04 
01:04:11.223683 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:11.223686 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:11.223816 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:04:11.223823 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:04:11.223826 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:04:11.223830 | orchestrator | 2026-04-04 01:04:11.223905 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:04:11.223910 | orchestrator | Saturday 04 April 2026 01:00:06 +0000 (0:00:00.619) 0:00:00.929 ******** 2026-04-04 01:04:11.223913 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-04 01:04:11.223916 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-04 01:04:11.223919 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-04 01:04:11.223922 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-04 01:04:11.223926 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-04 01:04:11.223929 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-04 01:04:11.223932 | orchestrator | 2026-04-04 01:04:11.223946 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-04 01:04:11.223950 | orchestrator | 2026-04-04 01:04:11.223953 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:04:11.223957 | orchestrator | Saturday 04 April 2026 01:00:07 +0000 (0:00:00.769) 0:00:01.699 ******** 2026-04-04 01:04:11.223960 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:04:11.223969 | orchestrator | 2026-04-04 01:04:11.223972 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-04 
01:04:11.223976 | orchestrator | Saturday 04 April 2026 01:00:08 +0000 (0:00:01.039) 0:00:02.738 ******** 2026-04-04 01:04:11.223979 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:04:11.223982 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:04:11.224014 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:11.224022 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:11.224027 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:04:11.224032 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:04:11.224037 | orchestrator | 2026-04-04 01:04:11.224042 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-04 01:04:11.224047 | orchestrator | Saturday 04 April 2026 01:00:09 +0000 (0:00:01.352) 0:00:04.090 ******** 2026-04-04 01:04:11.224051 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:04:11.224057 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:11.224062 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:11.224067 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:04:11.224072 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:04:11.224077 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:04:11.224082 | orchestrator | 2026-04-04 01:04:11.224087 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-04 01:04:11.224092 | orchestrator | Saturday 04 April 2026 01:00:10 +0000 (0:00:01.242) 0:00:05.333 ******** 2026-04-04 01:04:11.224096 | orchestrator | ok: [testbed-node-0] => { 2026-04-04 01:04:11.224101 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224107 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224112 | orchestrator | } 2026-04-04 01:04:11.224118 | orchestrator | ok: [testbed-node-1] => { 2026-04-04 01:04:11.224122 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224126 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224129 | orchestrator | } 2026-04-04 01:04:11.224132 | 
orchestrator | ok: [testbed-node-2] => { 2026-04-04 01:04:11.224135 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224138 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224141 | orchestrator | } 2026-04-04 01:04:11.224144 | orchestrator | ok: [testbed-node-3] => { 2026-04-04 01:04:11.224147 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224150 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224154 | orchestrator | } 2026-04-04 01:04:11.224157 | orchestrator | ok: [testbed-node-4] => { 2026-04-04 01:04:11.224160 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224163 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224166 | orchestrator | } 2026-04-04 01:04:11.224169 | orchestrator | ok: [testbed-node-5] => { 2026-04-04 01:04:11.224172 | orchestrator |  "changed": false, 2026-04-04 01:04:11.224175 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:04:11.224178 | orchestrator | } 2026-04-04 01:04:11.224181 | orchestrator | 2026-04-04 01:04:11.224185 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-04 01:04:11.224188 | orchestrator | Saturday 04 April 2026 01:00:11 +0000 (0:00:00.563) 0:00:05.896 ******** 2026-04-04 01:04:11.224193 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224198 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224203 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224208 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224213 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224218 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224224 | orchestrator | 2026-04-04 01:04:11.224229 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-04-04 01:04:11.224234 | orchestrator | Saturday 04 April 2026 01:00:12 +0000 (0:00:00.677) 0:00:06.574 ******** 2026-04-04 
01:04:11.224240 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-04-04 01:04:11.224245 | orchestrator | 2026-04-04 01:04:11.224251 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-04-04 01:04:11.224262 | orchestrator | Saturday 04 April 2026 01:00:15 +0000 (0:00:03.329) 0:00:09.904 ******** 2026-04-04 01:04:11.224266 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-04-04 01:04:11.224269 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-04-04 01:04:11.224272 | orchestrator | 2026-04-04 01:04:11.224276 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-04-04 01:04:11.224279 | orchestrator | Saturday 04 April 2026 01:00:22 +0000 (0:00:06.933) 0:00:16.837 ******** 2026-04-04 01:04:11.224282 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:04:11.224285 | orchestrator | 2026-04-04 01:04:11.224288 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-04-04 01:04:11.224291 | orchestrator | Saturday 04 April 2026 01:00:26 +0000 (0:00:04.304) 0:00:21.142 ******** 2026-04-04 01:04:11.224298 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-04-04 01:04:11.224302 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:04:11.224305 | orchestrator | 2026-04-04 01:04:11.224308 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-04-04 01:04:11.224311 | orchestrator | Saturday 04 April 2026 01:00:30 +0000 (0:00:03.991) 0:00:25.134 ******** 2026-04-04 01:04:11.224314 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:04:11.224317 | orchestrator | 2026-04-04 01:04:11.224321 | orchestrator | TASK [service-ks-register : neutron | 
Granting user roles] ********************* 2026-04-04 01:04:11.224324 | orchestrator | Saturday 04 April 2026 01:00:34 +0000 (0:00:03.577) 0:00:28.712 ******** 2026-04-04 01:04:11.224327 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-04-04 01:04:11.224330 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-04-04 01:04:11.224333 | orchestrator | 2026-04-04 01:04:11.224336 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:04:11.224362 | orchestrator | Saturday 04 April 2026 01:00:42 +0000 (0:00:07.990) 0:00:36.702 ******** 2026-04-04 01:04:11.224365 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224368 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224372 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224375 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224378 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224381 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224384 | orchestrator | 2026-04-04 01:04:11.224387 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-04 01:04:11.224390 | orchestrator | Saturday 04 April 2026 01:00:42 +0000 (0:00:00.481) 0:00:37.183 ******** 2026-04-04 01:04:11.224393 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224396 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224400 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224403 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224406 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224411 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224418 | orchestrator | 2026-04-04 01:04:11.224425 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-04 01:04:11.224431 | 
orchestrator | Saturday 04 April 2026 01:00:44 +0000 (0:00:01.810) 0:00:38.994 ******** 2026-04-04 01:04:11.224437 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:11.224442 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:04:11.224448 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:11.224454 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:04:11.224459 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:04:11.224464 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:04:11.224469 | orchestrator | 2026-04-04 01:04:11.224474 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-04 01:04:11.224480 | orchestrator | Saturday 04 April 2026 01:00:45 +0000 (0:00:01.073) 0:00:40.067 ******** 2026-04-04 01:04:11.224490 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224496 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224502 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224508 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224514 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224517 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224521 | orchestrator | 2026-04-04 01:04:11.224524 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-04 01:04:11.224527 | orchestrator | Saturday 04 April 2026 01:00:47 +0000 (0:00:01.847) 0:00:41.914 ******** 2026-04-04 01:04:11.224532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224582 | orchestrator | 2026-04-04 01:04:11.224586 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-04 01:04:11.224590 | orchestrator | Saturday 04 April 2026 01:00:49 +0000 (0:00:02.478) 0:00:44.393 ******** 2026-04-04 01:04:11.224593 | orchestrator | [WARNING]: Skipped 2026-04-04 01:04:11.224597 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-04 01:04:11.224601 | orchestrator | due to this access issue: 2026-04-04 01:04:11.224605 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-04 01:04:11.224609 | orchestrator | a directory 2026-04-04 01:04:11.224612 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:04:11.224616 | orchestrator | 2026-04-04 01:04:11.224620 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:04:11.224624 | orchestrator | Saturday 04 April 2026 01:00:50 +0000 (0:00:00.771) 0:00:45.165 ******** 2026-04-04 01:04:11.224627 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:04:11.224632 | orchestrator | 2026-04-04 01:04:11.224636 | orchestrator | 
TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-04 01:04:11.224639 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:01.204) 0:00:46.369 ******** 2026-04-04 01:04:11.224655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.224670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.224694 | orchestrator | 2026-04-04 01:04:11.224697 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-04-04 01:04:11.224701 | orchestrator | Saturday 04 April 2026 01:00:55 +0000 (0:00:03.532) 0:00:49.902 ******** 2026-04-04 01:04:11.224705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224709 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224716 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224723 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224731 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224749 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224755 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224758 | orchestrator | 2026-04-04 01:04:11.224761 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-04 01:04:11.224765 | orchestrator | Saturday 04 April 2026 01:00:57 +0000 (0:00:02.030) 0:00:51.932 ******** 2026-04-04 01:04:11.224768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224771 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224778 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224791 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224797 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224804 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.224810 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224858 | orchestrator | 2026-04-04 01:04:11.224863 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-04 01:04:11.224867 | orchestrator | Saturday 04 April 2026 01:01:00 +0000 (0:00:03.538) 0:00:55.471 ******** 2026-04-04 01:04:11.224870 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224873 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224876 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224879 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224883 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224886 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224889 | orchestrator | 2026-04-04 01:04:11.224892 | orchestrator | TASK [neutron : Check if policies shall be overwritten] 
************************ 2026-04-04 01:04:11.224899 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:02.061) 0:00:57.532 ******** 2026-04-04 01:04:11.224902 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224905 | orchestrator | 2026-04-04 01:04:11.224908 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-04 01:04:11.224912 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:00.234) 0:00:57.767 ******** 2026-04-04 01:04:11.224915 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224918 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224921 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224926 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.224929 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.224932 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224935 | orchestrator | 2026-04-04 01:04:11.224938 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-04 01:04:11.224942 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:00.459) 0:00:58.226 ******** 2026-04-04 01:04:11.224950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224954 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.224957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224960 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.224964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 
01:04:11.224967 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.224970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.224976 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.224981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225018 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225031 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225034 | orchestrator | 2026-04-04 01:04:11.225037 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-04 01:04:11.225041 | orchestrator | Saturday 04 April 2026 01:01:06 +0000 (0:00:02.795) 0:01:01.022 ******** 2026-04-04 01:04:11.225044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225047 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225066 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225073 | orchestrator | 2026-04-04 01:04:11.225076 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-04 01:04:11.225079 | orchestrator | Saturday 04 April 2026 01:01:09 +0000 (0:00:02.953) 0:01:03.976 ******** 2026-04-04 01:04:11.225083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.225112 | orchestrator | 2026-04-04 01:04:11.225115 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-04 01:04:11.225118 | orchestrator | Saturday 04 April 2026 01:01:15 +0000 (0:00:06.329) 0:01:10.305 ******** 2026-04-04 01:04:11.225123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225127 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225137 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225145 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225159 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225170 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225182 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225188 | orchestrator | 2026-04-04 01:04:11.225194 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-04 01:04:11.225199 | orchestrator | Saturday 04 April 2026 01:01:17 +0000 (0:00:02.167) 0:01:12.473 ******** 2026-04-04 01:04:11.225204 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225209 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225214 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225219 | orchestrator | changed: 
[testbed-node-1] 2026-04-04 01:04:11.225228 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:11.225232 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:04:11.225236 | orchestrator | 2026-04-04 01:04:11.225239 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-04 01:04:11.225242 | orchestrator | Saturday 04 April 2026 01:01:20 +0000 (0:00:02.292) 0:01:14.765 ******** 2026-04-04 01:04:11.225245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225252 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225259 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225265 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225276 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.225286 | orchestrator | 2026-04-04 01:04:11.225289 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-04 01:04:11.225292 | orchestrator | Saturday 
04 April 2026 01:01:23 +0000 (0:00:03.680) 0:01:18.445 ******** 2026-04-04 01:04:11.225295 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225298 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225302 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225305 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225308 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225311 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225314 | orchestrator | 2026-04-04 01:04:11.225317 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-04 01:04:11.225320 | orchestrator | Saturday 04 April 2026 01:01:26 +0000 (0:00:02.905) 0:01:21.351 ******** 2026-04-04 01:04:11.225323 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225327 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225330 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225333 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225336 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225339 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225342 | orchestrator | 2026-04-04 01:04:11.225345 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-04 01:04:11.225348 | orchestrator | Saturday 04 April 2026 01:01:29 +0000 (0:00:02.233) 0:01:23.584 ******** 2026-04-04 01:04:11.225352 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225355 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225358 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225361 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225364 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225368 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225371 | orchestrator | 2026-04-04 01:04:11.225374 | orchestrator 
| TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-04 01:04:11.225377 | orchestrator | Saturday 04 April 2026 01:01:31 +0000 (0:00:02.645) 0:01:26.229 ******** 2026-04-04 01:04:11.225380 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225383 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225386 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225389 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225392 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225395 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225399 | orchestrator | 2026-04-04 01:04:11.225402 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-04 01:04:11.225405 | orchestrator | Saturday 04 April 2026 01:01:34 +0000 (0:00:02.314) 0:01:28.544 ******** 2026-04-04 01:04:11.225408 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225412 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225415 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225418 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225421 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225424 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225427 | orchestrator | 2026-04-04 01:04:11.225430 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-04 01:04:11.225434 | orchestrator | Saturday 04 April 2026 01:01:36 +0000 (0:00:02.727) 0:01:31.271 ******** 2026-04-04 01:04:11.225437 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225440 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225446 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225454 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225462 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
01:04:11.225467 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225472 | orchestrator | 2026-04-04 01:04:11.225478 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-04 01:04:11.225482 | orchestrator | Saturday 04 April 2026 01:01:40 +0000 (0:00:03.610) 0:01:34.881 ******** 2026-04-04 01:04:11.225485 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225489 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225492 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225495 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225498 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225502 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225508 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225513 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225516 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225520 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225524 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-04 01:04:11.225530 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225538 | orchestrator | 2026-04-04 01:04:11.225544 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-04 01:04:11.225549 | orchestrator | Saturday 04 April 2026 01:01:43 +0000 (0:00:02.836) 0:01:37.718 ******** 2026-04-04 01:04:11.225555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225560 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225571 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225587 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225606 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225617 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225628 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225633 | orchestrator | 2026-04-04 01:04:11.225639 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-04 01:04:11.225645 | orchestrator | Saturday 04 April 2026 01:01:45 +0000 (0:00:02.694) 0:01:40.412 ******** 2026-04-04 01:04:11.225651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225659 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225671 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.225743 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225751 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225763 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.225771 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225775 | orchestrator | 2026-04-04 01:04:11.225779 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-04 01:04:11.225784 | orchestrator | Saturday 04 April 2026 01:01:48 +0000 (0:00:02.767) 0:01:43.180 ******** 2026-04-04 01:04:11.225787 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225790 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225794 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225797 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225800 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225803 | orchestrator | skipping: [testbed-node-4] 2026-04-04 
01:04:11.225807 | orchestrator | 2026-04-04 01:04:11.225810 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-04 01:04:11.225813 | orchestrator | Saturday 04 April 2026 01:01:51 +0000 (0:00:02.715) 0:01:45.895 ******** 2026-04-04 01:04:11.225819 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225823 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225826 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225829 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:04:11.225833 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:04:11.225836 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:04:11.225839 | orchestrator | 2026-04-04 01:04:11.225843 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-04 01:04:11.225846 | orchestrator | Saturday 04 April 2026 01:01:55 +0000 (0:00:03.875) 0:01:49.771 ******** 2026-04-04 01:04:11.225849 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225853 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225856 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225859 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225863 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225866 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225869 | orchestrator | 2026-04-04 01:04:11.225873 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-04 01:04:11.225879 | orchestrator | Saturday 04 April 2026 01:01:58 +0000 (0:00:03.070) 0:01:52.841 ******** 2026-04-04 01:04:11.225882 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225885 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225889 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225892 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
01:04:11.225895 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225898 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225901 | orchestrator | 2026-04-04 01:04:11.225904 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-04 01:04:11.225908 | orchestrator | Saturday 04 April 2026 01:02:00 +0000 (0:00:02.519) 0:01:55.361 ******** 2026-04-04 01:04:11.225911 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225914 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225917 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225920 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225923 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225929 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225932 | orchestrator | 2026-04-04 01:04:11.225936 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-04 01:04:11.225939 | orchestrator | Saturday 04 April 2026 01:02:02 +0000 (0:00:02.006) 0:01:57.367 ******** 2026-04-04 01:04:11.225943 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225946 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225949 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.225952 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.225955 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.225959 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.225962 | orchestrator | 2026-04-04 01:04:11.225965 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-04-04 01:04:11.225968 | orchestrator | Saturday 04 April 2026 01:02:04 +0000 (0:00:01.842) 0:01:59.209 ******** 2026-04-04 01:04:11.225972 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.225975 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
01:04:11.225978 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.225981 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226084 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226094 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226098 | orchestrator | 2026-04-04 01:04:11.226101 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-04 01:04:11.226104 | orchestrator | Saturday 04 April 2026 01:02:06 +0000 (0:00:01.881) 0:02:01.091 ******** 2026-04-04 01:04:11.226107 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.226110 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226114 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.226117 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.226120 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226123 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226126 | orchestrator | 2026-04-04 01:04:11.226129 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-04 01:04:11.226133 | orchestrator | Saturday 04 April 2026 01:02:08 +0000 (0:00:01.591) 0:02:02.683 ******** 2026-04-04 01:04:11.226137 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.226140 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.226143 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.226146 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226149 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226152 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226155 | orchestrator | 2026-04-04 01:04:11.226158 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-04 01:04:11.226161 | orchestrator | Saturday 04 April 2026 01:02:09 +0000 (0:00:01.806) 0:02:04.489 ******** 2026-04-04 
01:04:11.226165 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226168 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.226171 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226174 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.226178 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226181 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226184 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226187 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.226190 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226193 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226196 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-04 01:04:11.226203 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226206 | orchestrator | 2026-04-04 01:04:11.226209 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-04 01:04:11.226216 | orchestrator | Saturday 04 April 2026 01:02:12 +0000 (0:00:02.285) 0:02:06.775 ******** 2026-04-04 01:04:11.226223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.226227 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.226230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.226234 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.226237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-04 01:04:11.226240 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.226247 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.226258 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.226264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-04 01:04:11.226267 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226271 | orchestrator | 2026-04-04 01:04:11.226274 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-04-04 01:04:11.226277 | orchestrator | Saturday 04 April 2026 01:02:14 +0000 (0:00:01.978) 0:02:08.753 ******** 2026-04-04 01:04:11.226281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.226284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.226288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}}) 2026-04-04 01:04:11.226297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-04 01:04:11.226303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.226307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-04 01:04:11.226310 | orchestrator | 2026-04-04 01:04:11.226314 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-04 01:04:11.226317 | orchestrator | Saturday 04 April 2026 01:02:16 +0000 (0:00:02.390) 0:02:11.144 ******** 2026-04-04 01:04:11.226320 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:11.226323 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:11.226326 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:11.226329 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:04:11.226332 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:04:11.226335 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:04:11.226339 | orchestrator | 2026-04-04 01:04:11.226342 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-04-04 01:04:11.226345 | orchestrator | Saturday 04 April 2026 01:02:17 +0000 (0:00:00.576) 0:02:11.721 ******** 2026-04-04 01:04:11.226349 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:11.226352 | orchestrator | 2026-04-04 01:04:11.226355 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-04-04 01:04:11.226358 | orchestrator | Saturday 04 April 2026 01:02:19 +0000 (0:00:02.081) 0:02:13.802 ******** 2026-04-04 01:04:11.226363 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:11.226366 | orchestrator | 2026-04-04 01:04:11.226370 | 
orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-04 01:04:11.226373 | orchestrator | Saturday 04 April 2026 01:02:21 +0000 (0:00:02.195) 0:02:15.998 ******** 2026-04-04 01:04:11.226376 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:11.226379 | orchestrator | 2026-04-04 01:04:11.226382 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226385 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:39.108) 0:02:55.106 ******** 2026-04-04 01:04:11.226388 | orchestrator | 2026-04-04 01:04:11.226392 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226395 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.059) 0:02:55.166 ******** 2026-04-04 01:04:11.226398 | orchestrator | 2026-04-04 01:04:11.226401 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226404 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.060) 0:02:55.227 ******** 2026-04-04 01:04:11.226407 | orchestrator | 2026-04-04 01:04:11.226410 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226413 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.094) 0:02:55.321 ******** 2026-04-04 01:04:11.226417 | orchestrator | 2026-04-04 01:04:11.226420 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226423 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.065) 0:02:55.387 ******** 2026-04-04 01:04:11.226426 | orchestrator | 2026-04-04 01:04:11.226429 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-04-04 01:04:11.226433 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.075) 0:02:55.462 ******** 
2026-04-04 01:04:11.226436 | orchestrator | 2026-04-04 01:04:11.226440 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-04-04 01:04:11.226444 | orchestrator | Saturday 04 April 2026 01:03:01 +0000 (0:00:00.139) 0:02:55.602 ******** 2026-04-04 01:04:11.226447 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:11.226450 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:04:11.226453 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:04:11.226456 | orchestrator | 2026-04-04 01:04:11.226459 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-04-04 01:04:11.226463 | orchestrator | Saturday 04 April 2026 01:03:21 +0000 (0:00:20.043) 0:03:15.645 ******** 2026-04-04 01:04:11.226466 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:04:11.226469 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:04:11.226472 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:04:11.226475 | orchestrator | 2026-04-04 01:04:11.226478 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:04:11.226484 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:04:11.226488 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-04 01:04:11.226491 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-04-04 01:04:11.226494 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:04:11.226497 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:04:11.226500 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-04-04 01:04:11.226506 | 
orchestrator | 2026-04-04 01:04:11.226510 | orchestrator | 2026-04-04 01:04:11.226513 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:04:11.226516 | orchestrator | Saturday 04 April 2026 01:04:10 +0000 (0:00:48.998) 0:04:04.643 ******** 2026-04-04 01:04:11.226519 | orchestrator | =============================================================================== 2026-04-04 01:04:11.226522 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 49.00s 2026-04-04 01:04:11.226525 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.11s 2026-04-04 01:04:11.226528 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.04s 2026-04-04 01:04:11.226532 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.99s 2026-04-04 01:04:11.226535 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.93s 2026-04-04 01:04:11.226538 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.33s 2026-04-04 01:04:11.226541 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 4.30s 2026-04-04 01:04:11.226544 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.99s 2026-04-04 01:04:11.226547 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.88s 2026-04-04 01:04:11.226550 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.68s 2026-04-04 01:04:11.226554 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.61s 2026-04-04 01:04:11.226557 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.58s 2026-04-04 01:04:11.226560 | orchestrator | service-cert-copy : neutron | Copying over 
backend internal TLS key ----- 3.54s 2026-04-04 01:04:11.226563 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.53s 2026-04-04 01:04:11.226566 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.33s 2026-04-04 01:04:11.226569 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.07s 2026-04-04 01:04:11.226572 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.95s 2026-04-04 01:04:11.226575 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 2.91s 2026-04-04 01:04:11.226578 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 2.84s 2026-04-04 01:04:11.226581 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.80s 2026-04-04 01:04:11.226585 | orchestrator | 2026-04-04 01:04:11 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:11.226588 | orchestrator | 2026-04-04 01:04:11 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:11.226591 | orchestrator | 2026-04-04 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:14.268276 | orchestrator | 2026-04-04 01:04:14 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:04:14.269277 | orchestrator | 2026-04-04 01:04:14 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:14.269319 | orchestrator | 2026-04-04 01:04:14 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:14.272952 | orchestrator | 2026-04-04 01:04:14 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:14.273075 | orchestrator | 2026-04-04 01:04:14 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:17.312233 | orchestrator | 2026-04-04 
01:04:17 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:04:17.312832 | orchestrator | 2026-04-04 01:04:17 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:17.314667 | orchestrator | 2026-04-04 01:04:17 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:17.315495 | orchestrator | 2026-04-04 01:04:17 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:17.315520 | orchestrator | 2026-04-04 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:20.351249 | orchestrator | 2026-04-04 01:04:20 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state STARTED 2026-04-04 01:04:20.351816 | orchestrator | 2026-04-04 01:04:20 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:20.353847 | orchestrator | 2026-04-04 01:04:20 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:20.354249 | orchestrator | 2026-04-04 01:04:20 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:20.354466 | orchestrator | 2026-04-04 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:23.389109 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task e063d728-5f8b-4beb-b7fb-f152668ad412 is in state STARTED 2026-04-04 01:04:23.389225 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task d1cbedb3-f8b7-4023-8e3c-2d991460c4f3 is in state SUCCESS 2026-04-04 01:04:23.389938 | orchestrator | 2026-04-04 01:04:23.389992 | orchestrator | 2026-04-04 01:04:23.390002 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:04:23.390008 | orchestrator | 2026-04-04 01:04:23.390038 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:04:23.390045 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 
(0:00:00.520) 0:00:00.520 ******** 2026-04-04 01:04:23.390051 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:04:23.390057 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:04:23.390062 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:04:23.390067 | orchestrator | 2026-04-04 01:04:23.390073 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:04:23.390079 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 (0:00:00.297) 0:00:00.818 ******** 2026-04-04 01:04:23.390084 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-04 01:04:23.390090 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-04 01:04:23.390094 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-04 01:04:23.390097 | orchestrator | 2026-04-04 01:04:23.390101 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-04 01:04:23.390104 | orchestrator | 2026-04-04 01:04:23.390107 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:04:23.390111 | orchestrator | Saturday 04 April 2026 01:03:15 +0000 (0:00:00.281) 0:00:01.099 ******** 2026-04-04 01:04:23.390114 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:04:23.390118 | orchestrator | 2026-04-04 01:04:23.390121 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-04 01:04:23.390124 | orchestrator | Saturday 04 April 2026 01:03:16 +0000 (0:00:00.522) 0:00:01.622 ******** 2026-04-04 01:04:23.390127 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-04 01:04:23.390130 | orchestrator | 2026-04-04 01:04:23.390134 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-04 01:04:23.390137 | 
orchestrator | Saturday 04 April 2026 01:03:19 +0000 (0:00:03.342) 0:00:04.964 ******** 2026-04-04 01:04:23.390140 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-04 01:04:23.390143 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-04 01:04:23.390146 | orchestrator | 2026-04-04 01:04:23.390149 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-04 01:04:23.390152 | orchestrator | Saturday 04 April 2026 01:03:25 +0000 (0:00:06.179) 0:00:11.143 ******** 2026-04-04 01:04:23.390171 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:04:23.390179 | orchestrator | 2026-04-04 01:04:23.390184 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-04 01:04:23.390189 | orchestrator | Saturday 04 April 2026 01:03:28 +0000 (0:00:02.988) 0:00:14.132 ******** 2026-04-04 01:04:23.390194 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-04-04 01:04:23.390199 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:04:23.390203 | orchestrator | 2026-04-04 01:04:23.390208 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-04 01:04:23.390213 | orchestrator | Saturday 04 April 2026 01:03:32 +0000 (0:00:03.908) 0:00:18.041 ******** 2026-04-04 01:04:23.390217 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:04:23.390222 | orchestrator | 2026-04-04 01:04:23.390228 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-04 01:04:23.390240 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:03.392) 0:00:21.433 ******** 2026-04-04 01:04:23.390245 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 
2026-04-04 01:04:23.390250 | orchestrator | 2026-04-04 01:04:23.390254 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:04:23.390259 | orchestrator | Saturday 04 April 2026 01:03:39 +0000 (0:00:03.206) 0:00:24.639 ******** 2026-04-04 01:04:23.390264 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390268 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:23.390273 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:23.390277 | orchestrator | 2026-04-04 01:04:23.390282 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-04 01:04:23.390287 | orchestrator | Saturday 04 April 2026 01:03:39 +0000 (0:00:00.284) 0:00:24.924 ******** 2026-04-04 01:04:23.390293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390327 | orchestrator | 2026-04-04 01:04:23.390333 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-04 01:04:23.390339 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:01.459) 0:00:26.383 ******** 2026-04-04 01:04:23.390344 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390350 | 
orchestrator | 2026-04-04 01:04:23.390355 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-04 01:04:23.390360 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.124) 0:00:26.508 ******** 2026-04-04 01:04:23.390365 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390368 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:23.390371 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:23.390374 | orchestrator | 2026-04-04 01:04:23.390377 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-04 01:04:23.390381 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.262) 0:00:26.770 ******** 2026-04-04 01:04:23.390384 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:04:23.390387 | orchestrator | 2026-04-04 01:04:23.390390 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-04 01:04:23.390396 | orchestrator | Saturday 04 April 2026 01:03:42 +0000 (0:00:00.650) 0:00:27.421 ******** 2026-04-04 01:04:23.390399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-04-04 01:04:23.390416 | orchestrator | 2026-04-04 01:04:23.390420 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-04 01:04:23.390423 | orchestrator | Saturday 04 April 2026 01:03:43 +0000 (0:00:01.765) 0:00:29.186 ******** 2026-04-04 01:04:23.390426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390429 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390438 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:23.390443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390446 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:23.390450 | orchestrator | 2026-04-04 01:04:23.390453 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-04 01:04:23.390459 | orchestrator | Saturday 04 April 2026 01:03:44 +0000 (0:00:00.691) 0:00:29.878 ******** 2026-04-04 01:04:23.390462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390465 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390472 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:23.390477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390480 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:23.390483 | orchestrator | 2026-04-04 01:04:23.390486 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-04 01:04:23.390489 | orchestrator | Saturday 04 April 2026 01:03:45 +0000 (0:00:00.924) 0:00:30.802 ******** 2026-04-04 01:04:23.390494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390507 | orchestrator | 2026-04-04 01:04:23.390510 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-04 01:04:23.390513 | orchestrator | Saturday 04 April 2026 01:03:47 +0000 (0:00:01.573) 
0:00:32.375 ******** 2026-04-04 01:04:23.390519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390535 | orchestrator | 2026-04-04 01:04:23.390538 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-04 01:04:23.390541 | orchestrator | Saturday 04 April 2026 01:03:49 +0000 (0:00:02.196) 0:00:34.572 ******** 2026-04-04 01:04:23.390544 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-04 01:04:23.390548 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-04 01:04:23.390552 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-04-04 01:04:23.390555 | orchestrator | 2026-04-04 01:04:23.390559 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-04 01:04:23.390562 | orchestrator | Saturday 04 April 2026 01:03:50 +0000 (0:00:01.546) 0:00:36.119 ******** 2026-04-04 01:04:23.390566 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:23.390570 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:04:23.390574 | orchestrator 
| changed: [testbed-node-2] 2026-04-04 01:04:23.390578 | orchestrator | 2026-04-04 01:04:23.390581 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-04 01:04:23.390585 | orchestrator | Saturday 04 April 2026 01:03:52 +0000 (0:00:01.453) 0:00:37.573 ******** 2026-04-04 01:04:23.390589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390593 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:04:23.390599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390605 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:04:23.390611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-04 01:04:23.390616 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:04:23.390619 | orchestrator | 2026-04-04 01:04:23.390623 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-04-04 01:04:23.390626 | orchestrator | Saturday 04 April 2026 01:03:53 +0000 (0:00:01.247) 0:00:38.820 ******** 2026-04-04 01:04:23.390630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-04 01:04:23.390649 | orchestrator | 2026-04-04 01:04:23.390653 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-04 01:04:23.390656 | orchestrator | Saturday 04 April 2026 01:03:54 +0000 (0:00:01.207) 0:00:40.028 ******** 2026-04-04 01:04:23.390660 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:23.390664 | orchestrator | 2026-04-04 01:04:23.390667 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-04 01:04:23.390671 | orchestrator | Saturday 04 April 2026 01:03:56 +0000 (0:00:01.902) 0:00:41.930 ******** 2026-04-04 01:04:23.390675 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:23.390679 | orchestrator | 2026-04-04 01:04:23.390682 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-04 01:04:23.390686 | orchestrator | Saturday 04 April 2026 01:03:58 +0000 (0:00:02.196) 0:00:44.127 ******** 2026-04-04 01:04:23.390690 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:23.390694 | orchestrator | 2026-04-04 01:04:23.390697 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-04 01:04:23.390701 | orchestrator | Saturday 04 April 2026 01:04:11 +0000 (0:00:12.416) 0:00:56.544 ******** 2026-04-04 01:04:23.390705 | orchestrator | 2026-04-04 01:04:23.390709 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-04 01:04:23.390712 | 
orchestrator | Saturday 04 April 2026 01:04:11 +0000 (0:00:00.058) 0:00:56.602 ******** 2026-04-04 01:04:23.390716 | orchestrator | 2026-04-04 01:04:23.390722 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-04 01:04:23.390726 | orchestrator | Saturday 04 April 2026 01:04:11 +0000 (0:00:00.058) 0:00:56.660 ******** 2026-04-04 01:04:23.390729 | orchestrator | 2026-04-04 01:04:23.390733 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-04 01:04:23.390737 | orchestrator | Saturday 04 April 2026 01:04:11 +0000 (0:00:00.058) 0:00:56.719 ******** 2026-04-04 01:04:23.390740 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:04:23.390744 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:04:23.390748 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:04:23.390752 | orchestrator | 2026-04-04 01:04:23.390755 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:04:23.390759 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-04 01:04:23.390763 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 01:04:23.390767 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 01:04:23.390771 | orchestrator | 2026-04-04 01:04:23.390775 | orchestrator | 2026-04-04 01:04:23.390779 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:04:23.390783 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:09.357) 0:01:06.076 ******** 2026-04-04 01:04:23.390787 | orchestrator | =============================================================================== 2026-04-04 01:04:23.390790 | orchestrator | placement : Running placement bootstrap container 
---------------------- 12.42s 2026-04-04 01:04:23.390794 | orchestrator | placement : Restart placement-api container ----------------------------- 9.36s 2026-04-04 01:04:23.390798 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.18s 2026-04-04 01:04:23.390801 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.91s 2026-04-04 01:04:23.390804 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.39s 2026-04-04 01:04:23.390807 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.34s 2026-04-04 01:04:23.390813 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.21s 2026-04-04 01:04:23.390816 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.99s 2026-04-04 01:04:23.390819 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.20s 2026-04-04 01:04:23.390822 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.20s 2026-04-04 01:04:23.390825 | orchestrator | placement : Creating placement databases -------------------------------- 1.90s 2026-04-04 01:04:23.390828 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.77s 2026-04-04 01:04:23.390832 | orchestrator | placement : Copying over config.json files for services ----------------- 1.57s 2026-04-04 01:04:23.390835 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.55s 2026-04-04 01:04:23.390838 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.46s 2026-04-04 01:04:23.390841 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.45s 2026-04-04 01:04:23.390846 | orchestrator | placement : Copying over existing policy file 
--------------------------- 1.25s 2026-04-04 01:04:23.390850 | orchestrator | placement : Check placement containers ---------------------------------- 1.21s 2026-04-04 01:04:23.390853 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.92s 2026-04-04 01:04:23.390856 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.69s 2026-04-04 01:04:23.390859 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:23.391051 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:23.391749 | orchestrator | 2026-04-04 01:04:23 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:23.391769 | orchestrator | 2026-04-04 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:26.416076 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task e063d728-5f8b-4beb-b7fb-f152668ad412 is in state STARTED 2026-04-04 01:04:26.416652 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:26.417181 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:26.417974 | orchestrator | 2026-04-04 01:04:26 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:26.417998 | orchestrator | 2026-04-04 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:29.449941 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task e063d728-5f8b-4beb-b7fb-f152668ad412 is in state SUCCESS 2026-04-04 01:04:29.452930 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:29.454270 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 
2026-04-04 01:04:29.456060 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:29.458280 | orchestrator | 2026-04-04 01:04:29 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:29.458338 | orchestrator | 2026-04-04 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:32.493588 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:32.494587 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:32.498319 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:32.499372 | orchestrator | 2026-04-04 01:04:32 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:32.499409 | orchestrator | 2026-04-04 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:35.548022 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:35.548787 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:35.550051 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:35.551365 | orchestrator | 2026-04-04 01:04:35 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:35.551401 | orchestrator | 2026-04-04 01:04:35 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:38.596045 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:38.597053 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:38.597895 | 
orchestrator | 2026-04-04 01:04:38 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:38.599338 | orchestrator | 2026-04-04 01:04:38 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:38.599384 | orchestrator | 2026-04-04 01:04:38 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:41.632334 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:41.632379 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:41.632886 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:41.633684 | orchestrator | 2026-04-04 01:04:41 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:41.633702 | orchestrator | 2026-04-04 01:04:41 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:44.678242 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:44.680515 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:44.683408 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:44.685207 | orchestrator | 2026-04-04 01:04:44 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:44.685250 | orchestrator | 2026-04-04 01:04:44 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:47.727652 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:47.729514 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:47.730937 | orchestrator | 2026-04-04 
01:04:47 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:47.732660 | orchestrator | 2026-04-04 01:04:47 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:47.732737 | orchestrator | 2026-04-04 01:04:47 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:50.781548 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:50.783919 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:50.785874 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:50.787869 | orchestrator | 2026-04-04 01:04:50 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:50.787920 | orchestrator | 2026-04-04 01:04:50 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:53.823479 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:53.823908 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:53.824658 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:53.825828 | orchestrator | 2026-04-04 01:04:53 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:53.825867 | orchestrator | 2026-04-04 01:04:53 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:56.851521 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:56.852332 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:56.852980 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 
13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:56.854050 | orchestrator | 2026-04-04 01:04:56 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:56.854175 | orchestrator | 2026-04-04 01:04:56 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:04:59.898539 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:04:59.900112 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:04:59.902463 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:04:59.903368 | orchestrator | 2026-04-04 01:04:59 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:04:59.904110 | orchestrator | 2026-04-04 01:04:59 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:02.962255 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:02.962319 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:02.964125 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:02.965702 | orchestrator | 2026-04-04 01:05:02 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:02.965752 | orchestrator | 2026-04-04 01:05:02 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:05.997235 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:05.997932 | orchestrator | 2026-04-04 01:05:05 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:05.998687 | orchestrator | 2026-04-04 01:05:06 | INFO  | Task 
13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:05.999720 | orchestrator | 2026-04-04 01:05:06 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:05.999748 | orchestrator | 2026-04-04 01:05:06 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:09.046522 | orchestrator | 2026-04-04 01:05:09 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:09.047302 | orchestrator | 2026-04-04 01:05:09 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:09.049643 | orchestrator | 2026-04-04 01:05:09 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:09.050569 | orchestrator | 2026-04-04 01:05:09 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:09.050598 | orchestrator | 2026-04-04 01:05:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:12.100161 | orchestrator | 2026-04-04 01:05:12 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:12.102315 | orchestrator | 2026-04-04 01:05:12 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:12.103539 | orchestrator | 2026-04-04 01:05:12 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:12.105164 | orchestrator | 2026-04-04 01:05:12 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:12.105204 | orchestrator | 2026-04-04 01:05:12 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:15.153688 | orchestrator | 2026-04-04 01:05:15 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:15.155767 | orchestrator | 2026-04-04 01:05:15 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:15.157459 | orchestrator | 2026-04-04 01:05:15 | INFO  | Task 
13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:15.159257 | orchestrator | 2026-04-04 01:05:15 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:15.159303 | orchestrator | 2026-04-04 01:05:15 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:18.207603 | orchestrator | 2026-04-04 01:05:18 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:18.208184 | orchestrator | 2026-04-04 01:05:18 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:18.208839 | orchestrator | 2026-04-04 01:05:18 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:18.210408 | orchestrator | 2026-04-04 01:05:18 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:18.210436 | orchestrator | 2026-04-04 01:05:18 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:21.239409 | orchestrator | 2026-04-04 01:05:21 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:21.239454 | orchestrator | 2026-04-04 01:05:21 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:21.239681 | orchestrator | 2026-04-04 01:05:21 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:21.240869 | orchestrator | 2026-04-04 01:05:21 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:21.240906 | orchestrator | 2026-04-04 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:24.281327 | orchestrator | 2026-04-04 01:05:24 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:24.283135 | orchestrator | 2026-04-04 01:05:24 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state STARTED 2026-04-04 01:05:24.285404 | orchestrator | 2026-04-04 01:05:24 | INFO  | Task 
13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:05:24.287712 | orchestrator | 2026-04-04 01:05:24 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:05:24.288231 | orchestrator | 2026-04-04 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:05:27.327957 | orchestrator | 2026-04-04 01:05:27 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:05:27.330680 | orchestrator | 2026-04-04 01:05:27 | INFO  | Task 43cd200a-38dd-47b9-a5c1-0b4f9738c84a is in state SUCCESS 2026-04-04 01:05:27.331477 | orchestrator | 2026-04-04 01:05:27.331515 | orchestrator | 2026-04-04 01:05:27.331521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:05:27.331525 | orchestrator | 2026-04-04 01:05:27.331565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:05:27.331570 | orchestrator | Saturday 04 April 2026 01:04:25 +0000 (0:00:00.353) 0:00:00.353 ******** 2026-04-04 01:05:27.331574 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:27.331578 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:27.331581 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:27.331584 | orchestrator | 2026-04-04 01:05:27.331587 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:05:27.331591 | orchestrator | Saturday 04 April 2026 01:04:26 +0000 (0:00:00.450) 0:00:00.804 ******** 2026-04-04 01:05:27.331594 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-04-04 01:05:27.331597 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-04-04 01:05:27.331600 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-04-04 01:05:27.331603 | orchestrator | 2026-04-04 01:05:27.331606 | orchestrator | PLAY [Wait for the Nova service] 
*********************************************** 2026-04-04 01:05:27.331639 | orchestrator | 2026-04-04 01:05:27.331642 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-04-04 01:05:27.331646 | orchestrator | Saturday 04 April 2026 01:04:26 +0000 (0:00:00.866) 0:00:01.670 ******** 2026-04-04 01:05:27.331649 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:27.331652 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:27.331655 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:27.331658 | orchestrator | 2026-04-04 01:05:27.331661 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:05:27.331675 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:27.331680 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:27.331683 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:05:27.331686 | orchestrator | 2026-04-04 01:05:27.331689 | orchestrator | 2026-04-04 01:05:27.331692 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:05:27.331695 | orchestrator | Saturday 04 April 2026 01:04:27 +0000 (0:00:00.980) 0:00:02.650 ******** 2026-04-04 01:05:27.331699 | orchestrator | =============================================================================== 2026-04-04 01:05:27.331702 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.98s 2026-04-04 01:05:27.331705 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s 2026-04-04 01:05:27.331708 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s 2026-04-04 01:05:27.331719 | orchestrator | 2026-04-04 01:05:27.331726 | orchestrator 
| 2026-04-04 01:05:27.331729 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:05:27.331732 | orchestrator | 2026-04-04 01:05:27.331736 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:05:27.331739 | orchestrator | Saturday 04 April 2026 01:03:48 +0000 (0:00:00.266) 0:00:00.266 ******** 2026-04-04 01:05:27.331754 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:27.331757 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:27.331782 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:27.331787 | orchestrator | 2026-04-04 01:05:27.331790 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:05:27.331793 | orchestrator | Saturday 04 April 2026 01:03:48 +0000 (0:00:00.263) 0:00:00.530 ******** 2026-04-04 01:05:27.331797 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-04 01:05:27.331800 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-04 01:05:27.331803 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-04 01:05:27.331807 | orchestrator | 2026-04-04 01:05:27.331812 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-04 01:05:27.331819 | orchestrator | 2026-04-04 01:05:27.331826 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-04 01:05:27.331831 | orchestrator | Saturday 04 April 2026 01:03:49 +0000 (0:00:00.257) 0:00:00.787 ******** 2026-04-04 01:05:27.331836 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:05:27.331841 | orchestrator | 2026-04-04 01:05:27.331847 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-04 01:05:27.331852 | orchestrator | Saturday 04 
April 2026 01:03:49 +0000 (0:00:00.646) 0:00:01.433 ******** 2026-04-04 01:05:27.331857 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-04 01:05:27.331862 | orchestrator | 2026-04-04 01:05:27.331880 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-04 01:05:27.331885 | orchestrator | Saturday 04 April 2026 01:03:53 +0000 (0:00:03.854) 0:00:05.288 ******** 2026-04-04 01:05:27.331890 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-04 01:05:27.331896 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-04 01:05:27.331900 | orchestrator | 2026-04-04 01:05:27.331903 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-04 01:05:27.331913 | orchestrator | Saturday 04 April 2026 01:03:59 +0000 (0:00:05.892) 0:00:11.181 ******** 2026-04-04 01:05:27.331916 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-04 01:05:27.331920 | orchestrator | 2026-04-04 01:05:27.331923 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-04 01:05:27.331926 | orchestrator | Saturday 04 April 2026 01:04:02 +0000 (0:00:03.016) 0:00:14.197 ******** 2026-04-04 01:05:27.331937 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-04 01:05:27.331940 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-04 01:05:27.331943 | orchestrator | 2026-04-04 01:05:27.332021 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-04 01:05:27.332025 | orchestrator | Saturday 04 April 2026 01:04:06 +0000 (0:00:03.488) 0:00:17.686 ******** 2026-04-04 01:05:27.332028 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-04 01:05:27.332031 | orchestrator | 
2026-04-04 01:05:27.332034 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-04 01:05:27.332038 | orchestrator | Saturday 04 April 2026 01:04:08 +0000 (0:00:02.875) 0:00:20.562 ******** 2026-04-04 01:05:27.332041 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-04 01:05:27.332044 | orchestrator | 2026-04-04 01:05:27.332047 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-04 01:05:27.332050 | orchestrator | Saturday 04 April 2026 01:04:12 +0000 (0:00:03.369) 0:00:23.932 ******** 2026-04-04 01:05:27.332053 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:27.332056 | orchestrator | 2026-04-04 01:05:27.332060 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-04 01:05:27.332063 | orchestrator | Saturday 04 April 2026 01:04:15 +0000 (0:00:03.012) 0:00:26.944 ******** 2026-04-04 01:05:27.332071 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:27.332074 | orchestrator | 2026-04-04 01:05:27.332077 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-04 01:05:27.332080 | orchestrator | Saturday 04 April 2026 01:04:18 +0000 (0:00:03.432) 0:00:30.377 ******** 2026-04-04 01:05:27.332083 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:05:27.332086 | orchestrator | 2026-04-04 01:05:27.332089 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-04 01:05:27.332093 | orchestrator | Saturday 04 April 2026 01:04:22 +0000 (0:00:03.339) 0:00:33.716 ******** 2026-04-04 01:05:27.332098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332130 | orchestrator | 2026-04-04 01:05:27.332134 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-04 01:05:27.332137 | orchestrator | Saturday 04 April 2026 01:04:24 +0000 (0:00:02.225) 0:00:35.942 ******** 2026-04-04 01:05:27.332140 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:27.332143 | orchestrator | 2026-04-04 01:05:27.332146 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-04 01:05:27.332149 | orchestrator | Saturday 04 April 2026 01:04:24 +0000 (0:00:00.199) 0:00:36.144 ******** 2026-04-04 01:05:27.332152 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:27.332156 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:27.332159 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:27.332162 | orchestrator | 2026-04-04 01:05:27.332165 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-04 01:05:27.332168 | orchestrator | Saturday 04 April 2026 01:04:24 +0000 (0:00:00.439) 0:00:36.583 ******** 2026-04-04 01:05:27.332171 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:05:27.332174 | orchestrator | 2026-04-04 01:05:27.332177 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-04 01:05:27.332180 | orchestrator | Saturday 04 April 2026 01:04:26 +0000 (0:00:01.394) 0:00:37.978 ******** 2026-04-04 01:05:27.332186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332206 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332213 | orchestrator | 2026-04-04 01:05:27.332218 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-04 01:05:27.332224 | orchestrator | Saturday 04 April 2026 01:04:28 +0000 (0:00:02.403) 0:00:40.381 ******** 2026-04-04 01:05:27.332227 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:05:27.332230 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:05:27.332233 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:05:27.332236 | orchestrator | 2026-04-04 01:05:27.332239 | 
orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-04 01:05:27.332244 | orchestrator | Saturday 04 April 2026 01:04:29 +0000 (0:00:00.421) 0:00:40.803 ******** 2026-04-04 01:05:27.332248 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:05:27.332251 | orchestrator | 2026-04-04 01:05:27.332254 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-04 01:05:27.332257 | orchestrator | Saturday 04 April 2026 01:04:29 +0000 (0:00:00.565) 0:00:41.368 ******** 2026-04-04 01:05:27.332261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332293 | orchestrator | 2026-04-04 01:05:27.332296 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-04 01:05:27.332299 | orchestrator | Saturday 04 
April 2026 01:04:32 +0000 (0:00:02.465) 0:00:43.834 ******** 2026-04-04 01:05:27.332302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332309 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:27.332314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332326 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:27.332330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332337 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:27.332342 | orchestrator | 2026-04-04 01:05:27.332349 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-04 01:05:27.332356 | orchestrator | Saturday 04 April 2026 01:04:33 +0000 (0:00:01.627) 0:00:45.462 ******** 2026-04-04 01:05:27.332361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332387 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:27.332393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332399 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:27.332402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332411 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:27.332414 | orchestrator | 2026-04-04 01:05:27.332417 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-04 01:05:27.332420 | orchestrator | Saturday 04 April 2026 01:04:34 +0000 (0:00:00.851) 0:00:46.313 ******** 2026-04-04 01:05:27.332428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 
'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332454 | orchestrator | 2026-04-04 01:05:27.332457 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-04 01:05:27.332461 | orchestrator | Saturday 04 April 2026 01:04:36 +0000 (0:00:01.968) 0:00:48.282 ******** 2026-04-04 01:05:27.332464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332491 | orchestrator | 2026-04-04 01:05:27.332494 | orchestrator | TASK 
[magnum : Copying over existing policy file] ****************************** 2026-04-04 01:05:27.332497 | orchestrator | Saturday 04 April 2026 01:04:41 +0000 (0:00:04.791) 0:00:53.073 ******** 2026-04-04 01:05:27.332500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332513 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:05:27.332521 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332536 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:05:27.332607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-04 01:05:27.332614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:05:27.332624 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:05:27.332627 | orchestrator | 2026-04-04 01:05:27.332631 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-04 01:05:27.332634 | orchestrator | Saturday 04 April 2026 01:04:42 +0000 (0:00:00.598) 0:00:53.671 ******** 2026-04-04 01:05:27.332638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-04 01:05:27.332655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:05:27.332662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:05:27.332665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:05:27.332668 | orchestrator |
2026-04-04 01:05:27.332671 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-04 01:05:27.332675 | orchestrator | Saturday 04 April 2026 01:04:43 +0000 (0:00:01.704) 0:00:55.376 ********
2026-04-04 01:05:27.332678 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:05:27.332681 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:05:27.332684 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:05:27.332687 | orchestrator |
2026-04-04 01:05:27.332691 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-04-04 01:05:27.332694 | orchestrator | Saturday 04 April 2026 01:04:44 +0000 (0:00:00.263) 0:00:55.639 ********
2026-04-04 01:05:27.332697 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:27.332700 | orchestrator |
2026-04-04 01:05:27.332706 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-04-04 01:05:27.332709 | orchestrator | Saturday 04 April 2026 01:04:45 +0000 (0:00:01.728) 0:00:57.368 ********
2026-04-04 01:05:27.332712 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:27.332715 | orchestrator |
2026-04-04 01:05:27.332719 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-04 01:05:27.332722 | orchestrator | Saturday 04 April 2026 01:04:47 +0000 (0:00:02.153) 0:00:59.522 ********
2026-04-04 01:05:27.332727 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:27.332731 | orchestrator |
2026-04-04 01:05:27.332734 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-04 01:05:27.332737 | orchestrator | Saturday 04 April 2026 01:05:03 +0000 (0:00:15.296) 0:01:14.818 ********
2026-04-04 01:05:27.332740 | orchestrator |
2026-04-04 01:05:27.332743 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-04 01:05:27.332747 | orchestrator | Saturday 04 April 2026 01:05:03 +0000 (0:00:00.240) 0:01:15.058 ********
2026-04-04 01:05:27.332750 | orchestrator |
2026-04-04 01:05:27.332753 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-04 01:05:27.332756 | orchestrator | Saturday 04 April 2026 01:05:03 +0000 (0:00:00.072) 0:01:15.131 ********
2026-04-04 01:05:27.332759 | orchestrator |
2026-04-04 01:05:27.332763 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-04 01:05:27.332766 | orchestrator | Saturday 04 April 2026 01:05:03 +0000 (0:00:00.061) 0:01:15.193 ********
2026-04-04 01:05:27.332772 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:27.332776 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:05:27.332779 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:05:27.332782 | orchestrator |
2026-04-04 01:05:27.332785 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container]
******************
2026-04-04 01:05:27.332788 | orchestrator | Saturday 04 April 2026 01:05:19 +0000 (0:00:15.591) 0:01:30.785 ********
2026-04-04 01:05:27.332792 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:05:27.332795 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:05:27.332798 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:05:27.332801 | orchestrator |
2026-04-04 01:05:27.332804 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:05:27.332810 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-04 01:05:27.332818 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-04 01:05:27.332825 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-04 01:05:27.332830 | orchestrator |
2026-04-04 01:05:27.332835 | orchestrator |
2026-04-04 01:05:27.332840 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:05:27.332845 | orchestrator | Saturday 04 April 2026 01:05:27 +0000 (0:00:07.913) 0:01:38.698 ********
2026-04-04 01:05:27.332851 | orchestrator | ===============================================================================
2026-04-04 01:05:27.332856 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.59s
2026-04-04 01:05:27.332862 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.30s
2026-04-04 01:05:27.332892 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 7.91s
2026-04-04 01:05:27.332895 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.89s
2026-04-04 01:05:27.332899 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.79s
2026-04-04 01:05:27.332902 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.85s
2026-04-04 01:05:27.332905 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.49s
2026-04-04 01:05:27.332908 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.43s
2026-04-04 01:05:27.332911 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.37s
2026-04-04 01:05:27.332914 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.34s
2026-04-04 01:05:27.332917 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.02s
2026-04-04 01:05:27.332920 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.01s
2026-04-04 01:05:27.332923 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.88s
2026-04-04 01:05:27.332926 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.47s
2026-04-04 01:05:27.332930 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s
2026-04-04 01:05:27.332933 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.23s
2026-04-04 01:05:27.332936 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.15s
2026-04-04 01:05:27.332939 | orchestrator | magnum : Copying over config.json files for services -------------------- 1.97s
2026-04-04 01:05:27.332942 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.73s
2026-04-04 01:05:27.332945 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.70s
2026-04-04 01:05:27.332949 | orchestrator | 2026-04-04 01:05:27 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04
01:05:27.334289 | orchestrator | 2026-04-04 01:05:27 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:27.334331 | orchestrator | 2026-04-04 01:05:27 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:30.378845 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:30.379428 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:30.381401 | orchestrator | 2026-04-04 01:05:30 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:30.381442 | orchestrator | 2026-04-04 01:05:30 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:33.418217 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:33.419942 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:33.421777 | orchestrator | 2026-04-04 01:05:33 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:33.421834 | orchestrator | 2026-04-04 01:05:33 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:36.467680 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:36.469971 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:36.471862 | orchestrator | 2026-04-04 01:05:36 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:36.471964 | orchestrator | 2026-04-04 01:05:36 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:39.508246 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:39.508307 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:39.508599 | orchestrator | 2026-04-04 01:05:39 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:39.508613 | orchestrator | 2026-04-04 01:05:39 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:42.533241 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:42.534699 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:42.535310 | orchestrator | 2026-04-04 01:05:42 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:42.535336 | orchestrator | 2026-04-04 01:05:42 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:45.558230 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:45.560446 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:45.561995 | orchestrator | 2026-04-04 01:05:45 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:45.562060 | orchestrator | 2026-04-04 01:05:45 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:48.587594 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:48.589401 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:48.590922 | orchestrator | 2026-04-04 01:05:48 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:48.590989 | orchestrator | 2026-04-04 01:05:48 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:51.638286 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:51.639599 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:51.641552 | orchestrator | 2026-04-04 01:05:51 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:51.641600 | orchestrator | 2026-04-04 01:05:51 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:54.691086 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:54.693126 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:54.695235 | orchestrator | 2026-04-04 01:05:54 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:54.695295 | orchestrator | 2026-04-04 01:05:54 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:05:57.740399 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:05:57.741936 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:05:57.743980 | orchestrator | 2026-04-04 01:05:57 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:05:57.744013 | orchestrator | 2026-04-04 01:05:57 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:06:00.791668 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED
2026-04-04 01:06:00.793428 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:06:00.795782 | orchestrator | 2026-04-04 01:06:00 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED
2026-04-04 01:06:00.795976 | orchestrator | 2026-04-04 01:06:00 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:06:03.843027 | orchestrator
| 2026-04-04 01:06:03 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:06:03.845652 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:03.847526 | orchestrator | 2026-04-04 01:06:03 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:03.847577 | orchestrator | 2026-04-04 01:06:03 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:06.891199 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:06:06.892712 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:06.894508 | orchestrator | 2026-04-04 01:06:06 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:06.895140 | orchestrator | 2026-04-04 01:06:06 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:09.951346 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state STARTED 2026-04-04 01:06:09.953941 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:09.956149 | orchestrator | 2026-04-04 01:06:09 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:09.956236 | orchestrator | 2026-04-04 01:06:09 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:13.004961 | orchestrator | 2026-04-04 01:06:13 | INFO  | Task be9e927d-872c-44fc-9ab9-4816fee72389 is in state SUCCESS 2026-04-04 01:06:13.008040 | orchestrator | 2026-04-04 01:06:13.008095 | orchestrator | 2026-04-04 01:06:13.008142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:06:13.008152 | orchestrator | 2026-04-04 01:06:13.008184 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-04-04 01:06:13.008191 | orchestrator | Saturday 04 April 2026 01:04:13 +0000 (0:00:00.273) 0:00:00.273 ******** 2026-04-04 01:06:13.008230 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:13.008237 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:06:13.008243 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:06:13.008250 | orchestrator | 2026-04-04 01:06:13.008256 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:06:13.008262 | orchestrator | Saturday 04 April 2026 01:04:13 +0000 (0:00:00.248) 0:00:00.522 ******** 2026-04-04 01:06:13.008269 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-04-04 01:06:13.008275 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-04-04 01:06:13.008282 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-04-04 01:06:13.008288 | orchestrator | 2026-04-04 01:06:13.008294 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-04-04 01:06:13.008300 | orchestrator | 2026-04-04 01:06:13.008306 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-04 01:06:13.008313 | orchestrator | Saturday 04 April 2026 01:04:13 +0000 (0:00:00.267) 0:00:00.789 ******** 2026-04-04 01:06:13.008493 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:13.008504 | orchestrator | 2026-04-04 01:06:13.008510 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-04-04 01:06:13.008516 | orchestrator | Saturday 04 April 2026 01:04:14 +0000 (0:00:00.500) 0:00:01.290 ******** 2026-04-04 01:06:13.008564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008598 | orchestrator | 2026-04-04 01:06:13.008605 | 
orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-04-04 01:06:13.008612 | orchestrator | Saturday 04 April 2026 01:04:15 +0000 (0:00:01.466) 0:00:02.757 ******** 2026-04-04 01:06:13.008632 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-04-04 01:06:13.008639 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-04-04 01:06:13.008646 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:06:13.008651 | orchestrator | 2026-04-04 01:06:13.008657 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-04-04 01:06:13.008664 | orchestrator | Saturday 04 April 2026 01:04:16 +0000 (0:00:00.789) 0:00:03.546 ******** 2026-04-04 01:06:13.008868 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:13.008877 | orchestrator | 2026-04-04 01:06:13.008881 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-04-04 01:06:13.008886 | orchestrator | Saturday 04 April 2026 01:04:17 +0000 (0:00:00.460) 0:00:04.007 ******** 2026-04-04 01:06:13.008907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008913 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.008925 | orchestrator | 2026-04-04 01:06:13.008929 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-04 01:06:13.008933 | orchestrator | Saturday 04 April 2026 01:04:18 +0000 (0:00:01.288) 0:00:05.295 ******** 2026-04-04 01:06:13.008937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.008947 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.008952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.008956 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.008971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.008976 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 01:06:13.008980 | orchestrator | 2026-04-04 01:06:13.008984 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-04 01:06:13.008988 | orchestrator | Saturday 04 April 2026 01:04:18 +0000 (0:00:00.338) 0:00:05.634 ******** 2026-04-04 01:06:13.008992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.008996 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.009002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.009006 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.009010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-04 01:06:13.009017 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.009021 | orchestrator | 2026-04-04 01:06:13.009025 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-04 01:06:13.009029 | orchestrator | Saturday 04 April 2026 01:04:19 +0000 (0:00:00.526) 0:00:06.160 ******** 2026-04-04 01:06:13.009033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.009037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.009052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.009057 | orchestrator | 2026-04-04 01:06:13.009061 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-04 01:06:13.009065 | orchestrator | Saturday 04 April 2026 01:04:20 +0000 (0:00:01.260) 0:00:07.420 ******** 2026-04-04 01:06:13.009069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 
2026-04-04 01:06:13.009075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.009087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.009091 | orchestrator | 2026-04-04 01:06:13.009095 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-04-04 01:06:13.009099 | orchestrator | Saturday 04 April 2026 01:04:21 +0000 (0:00:01.123) 0:00:08.543 ******** 2026-04-04 01:06:13.009103 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.009107 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.009111 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.009115 | orchestrator | 2026-04-04 01:06:13.009119 | orchestrator | TASK 
[grafana : Configuring Prometheus as data source for Grafana] ************* 2026-04-04 01:06:13.009123 | orchestrator | Saturday 04 April 2026 01:04:22 +0000 (0:00:00.439) 0:00:08.982 ******** 2026-04-04 01:06:13.009127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-04 01:06:13.009131 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-04 01:06:13.009135 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-04-04 01:06:13.009139 | orchestrator | 2026-04-04 01:06:13.009143 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-04-04 01:06:13.009149 | orchestrator | Saturday 04 April 2026 01:04:23 +0000 (0:00:01.506) 0:00:10.489 ******** 2026-04-04 01:06:13.009156 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-04 01:06:13.009165 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-04 01:06:13.009172 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-04-04 01:06:13.009178 | orchestrator | 2026-04-04 01:06:13.009184 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-04-04 01:06:13.009190 | orchestrator | Saturday 04 April 2026 01:04:25 +0000 (0:00:01.493) 0:00:11.982 ******** 2026-04-04 01:06:13.009215 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-04 01:06:13.009222 | orchestrator | 2026-04-04 01:06:13.009228 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-04-04 01:06:13.009235 | orchestrator | Saturday 04 April 2026 01:04:26 +0000 (0:00:01.162) 0:00:13.144 
******** 2026-04-04 01:06:13.009241 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-04-04 01:06:13.009248 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-04-04 01:06:13.009254 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:13.009261 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:06:13.009267 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:06:13.009274 | orchestrator | 2026-04-04 01:06:13.009280 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-04-04 01:06:13.009287 | orchestrator | Saturday 04 April 2026 01:04:27 +0000 (0:00:00.940) 0:00:14.085 ******** 2026-04-04 01:06:13.009293 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.009300 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.009309 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.009313 | orchestrator | 2026-04-04 01:06:13.009316 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-04-04 01:06:13.009320 | orchestrator | Saturday 04 April 2026 01:04:27 +0000 (0:00:00.312) 0:00:14.397 ******** 2026-04-04 01:06:13.009324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085889, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009332 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085889, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1085889, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085903, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7947433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 
01:06:13.009359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085903, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7947433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1085903, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7947433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085934, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.806818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}})
2026-04-04 01:06:13.009376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085934, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.806818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1085934, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.806818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085901, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7936268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085901, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7936268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1085901, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7936268, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085936, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8077435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085936, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8077435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1085936, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8077435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085895, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7917905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085895, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7917905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1085895, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7917905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085910, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085910, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1085910, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085922, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8037436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085922, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8037436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1085922, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8037436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085887, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7875361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085887, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7875361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1085887, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7875361, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085894, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085894, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1085894, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7907434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085902, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7937434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085902, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7937434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1085902, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7937434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085913, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8007436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085913, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8007436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1085913, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8007436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085930, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8062124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085930, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8062124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1085930, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8062124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085899, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7929523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085899, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7929523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1085899, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7929523, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085918, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8031366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085918, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8031366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1085918, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8031366, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085939, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8086116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085939, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8086116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1085939, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8086116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085912, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085912, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1085912, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7997434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085909, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7977436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085909, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7977436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1085909, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7977436, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085908, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7964776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085908, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7964776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1085908, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7964776, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085914, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8017435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085914, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8017435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1085914, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8017435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085905, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7957435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085905, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7957435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1085905, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.7957435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085925, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8047533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085925, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8047533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-04-04 01:06:13.009784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1085925, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8047533, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085898, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.792148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085898, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.792148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1085898, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.792148, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086070, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8517442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086070, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8517442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1086070, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8517442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085997, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8288894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085997, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8288894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1085997, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8288894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085988, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8223321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085988, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8223321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1085988, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8223321, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1086009, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8308218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1086009, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8308218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1086009, 'dev': 110, 'nlink': 1, 'atime': 
1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8308218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085943, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8092306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085943, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8092306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1085943, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8092306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086033, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8402452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086033, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8402452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1086033, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8402452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086013, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8368018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086013, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8368018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009979 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1086013, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8368018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1086038, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.844744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.009996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1086038, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.844744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1086038, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.844744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086064, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8507245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086064, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8507245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1086064, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8507245, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1086030, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.838706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1086030, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.838706, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1086030, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.838706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086007, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086007, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 
1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1086007, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1085994, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8237438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1085994, 'dev': 110, 'nlink': 1, 
'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8237438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1085994, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8237438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086005, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
29672, 'inode': 1086005, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1086005, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8299725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1085992, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8227437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1085992, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8227437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1085992, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8227437, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1086008, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8305774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1086008, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8305774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1086008, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8305774, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086061, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.849744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010237 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086061, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.849744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1086061, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.849744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086057, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.847744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086057, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.847744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1086057, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.847744, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1085981, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8208969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1085981, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8208969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1085981, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8208969, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1085984, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 
1775261758.8216574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1085984, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8216574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1085984, 'dev': 110, 'nlink': 1, 'atime': 1775260950.0, 'mtime': 1775260950.0, 'ctime': 1775261758.8216574, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 
1086027, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8379228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086027, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8379228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1086027, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8379228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1086053, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8462222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1086053, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8462222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1086053, 'dev': 110, 'nlink': 1, 'atime': 1775260951.0, 'mtime': 1775260951.0, 'ctime': 1775261758.8462222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-04 01:06:13.010435 | orchestrator | 2026-04-04 01:06:13.010447 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-04 
01:06:13.010454 | orchestrator | Saturday 04 April 2026 01:05:04 +0000 (0:00:36.693) 0:00:51.091 ******** 2026-04-04 01:06:13.010461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.010472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.010479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-04 01:06:13.010486 | orchestrator | 2026-04-04 01:06:13.010539 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-04 01:06:13.010547 | orchestrator | Saturday 04 April 2026 01:05:05 +0000 (0:00:01.222) 0:00:52.313 ******** 2026-04-04 01:06:13.010553 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:13.010560 | orchestrator | 2026-04-04 01:06:13.010566 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-04 01:06:13.010573 | orchestrator | Saturday 04 April 2026 01:05:07 +0000 (0:00:02.083) 0:00:54.397 ******** 2026-04-04 01:06:13.010579 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:13.010586 | orchestrator | 2026-04-04 01:06:13.010592 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-04 01:06:13.010599 | orchestrator | Saturday 04 April 2026 01:05:09 +0000 (0:00:02.301) 0:00:56.698 ******** 2026-04-04 01:06:13.010606 | orchestrator | 2026-04-04 01:06:13.010612 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-04 01:06:13.010618 | orchestrator | Saturday 04 April 2026 01:05:09 +0000 (0:00:00.060) 0:00:56.759 ******** 2026-04-04 01:06:13.010624 | orchestrator | 2026-04-04 01:06:13.010630 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-04 01:06:13.010637 | orchestrator | Saturday 04 April 2026 01:05:09 +0000 (0:00:00.064) 0:00:56.824 ******** 2026-04-04 01:06:13.010643 | orchestrator | 2026-04-04 01:06:13.010649 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-04 01:06:13.010655 | orchestrator | 
Saturday 04 April 2026 01:05:09 +0000 (0:00:00.074) 0:00:56.899 ******** 2026-04-04 01:06:13.010661 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.010671 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.010678 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:13.010689 | orchestrator | 2026-04-04 01:06:13.010696 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-04 01:06:13.010702 | orchestrator | Saturday 04 April 2026 01:05:11 +0000 (0:00:01.768) 0:00:58.668 ******** 2026-04-04 01:06:13.010708 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.010715 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.010721 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-04 01:06:13.010727 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-04 01:06:13.010734 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:13.010740 | orchestrator | 2026-04-04 01:06:13.010746 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-04 01:06:13.010753 | orchestrator | Saturday 04 April 2026 01:05:37 +0000 (0:00:26.017) 0:01:24.685 ******** 2026-04-04 01:06:13.010759 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.010765 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:13.010771 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:13.010777 | orchestrator | 2026-04-04 01:06:13.010784 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-04 01:06:13.010790 | orchestrator | Saturday 04 April 2026 01:06:06 +0000 (0:00:28.729) 0:01:53.414 ******** 2026-04-04 01:06:13.010811 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:13.010817 | orchestrator | 2026-04-04 01:06:13.010824 | orchestrator | 
TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-04 01:06:13.010830 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:02.629) 0:01:56.044 ******** 2026-04-04 01:06:13.010836 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.010842 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:13.010848 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:13.010855 | orchestrator | 2026-04-04 01:06:13.010861 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-04 01:06:13.010867 | orchestrator | Saturday 04 April 2026 01:06:09 +0000 (0:00:00.284) 0:01:56.329 ******** 2026-04-04 01:06:13.010878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-04-04 01:06:13.010885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-04 01:06:13.010892 | orchestrator | 2026-04-04 01:06:13.010898 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-04 01:06:13.010904 | orchestrator | Saturday 04 April 2026 01:06:11 +0000 (0:00:02.088) 0:01:58.418 ******** 2026-04-04 01:06:13.010910 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:13.010917 | orchestrator | 2026-04-04 01:06:13.010923 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:06:13.010929 | orchestrator | 
testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:06:13.010937 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:06:13.010943 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:06:13.010949 | orchestrator | 2026-04-04 01:06:13.010955 | orchestrator | 2026-04-04 01:06:13.010961 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:06:13.010971 | orchestrator | Saturday 04 April 2026 01:06:11 +0000 (0:00:00.255) 0:01:58.673 ******** 2026-04-04 01:06:13.010977 | orchestrator | =============================================================================== 2026-04-04 01:06:13.010983 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.69s 2026-04-04 01:06:13.010988 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.73s 2026-04-04 01:06:13.010994 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.02s 2026-04-04 01:06:13.011000 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.63s 2026-04-04 01:06:13.011006 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.30s 2026-04-04 01:06:13.011012 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.09s 2026-04-04 01:06:13.011018 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.08s 2026-04-04 01:06:13.011024 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s 2026-04-04 01:06:13.011029 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.51s 2026-04-04 01:06:13.011035 | orchestrator | grafana : Configuring 
dashboards provisioning --------------------------- 1.49s 2026-04-04 01:06:13.011041 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.47s 2026-04-04 01:06:13.011047 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-04-04 01:06:13.011056 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s 2026-04-04 01:06:13.011062 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.22s 2026-04-04 01:06:13.011068 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 1.16s 2026-04-04 01:06:13.011074 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.12s 2026-04-04 01:06:13.011080 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.94s 2026-04-04 01:06:13.011086 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.79s 2026-04-04 01:06:13.011091 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.53s 2026-04-04 01:06:13.011097 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.50s 2026-04-04 01:06:13.011103 | orchestrator | 2026-04-04 01:06:13 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:13.011109 | orchestrator | 2026-04-04 01:06:13 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:13.011115 | orchestrator | 2026-04-04 01:06:13 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:16.055244 | orchestrator | 2026-04-04 01:06:16 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:16.057225 | orchestrator | 2026-04-04 01:06:16 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:16.057522 | orchestrator | 
2026-04-04 01:06:16 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:19.106976 | orchestrator | 2026-04-04 01:06:19 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:19.107025 | orchestrator | 2026-04-04 01:06:19 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:19.107032 | orchestrator | 2026-04-04 01:06:19 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:22.146288 | orchestrator | 2026-04-04 01:06:22 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:22.148426 | orchestrator | 2026-04-04 01:06:22 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state STARTED 2026-04-04 01:06:22.148502 | orchestrator | 2026-04-04 01:06:22 | INFO  | Wait 1 second(s) until the next check 2026-04-04 01:06:25.183260 | orchestrator | 2026-04-04 01:06:25 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED 2026-04-04 01:06:25.186615 | orchestrator | 2026-04-04 01:06:25 | INFO  | Task 1006af2a-b80f-4e18-a741-c417002cf151 is in state SUCCESS 2026-04-04 01:06:25.188470 | orchestrator | 2026-04-04 01:06:25.188519 | orchestrator | 2026-04-04 01:06:25.188543 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:06:25.188558 | orchestrator | 2026-04-04 01:06:25.188565 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-04-04 01:06:25.188635 | orchestrator | Saturday 04 April 2026 00:57:52 +0000 (0:00:00.451) 0:00:00.451 ******** 2026-04-04 01:06:25.188645 | orchestrator | changed: [testbed-manager] 2026-04-04 01:06:25.188653 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.188659 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:25.188724 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:25.188732 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.188739 | orchestrator 
| changed: [testbed-node-4] 2026-04-04 01:06:25.188745 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.188751 | orchestrator | 2026-04-04 01:06:25.188770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:06:25.188789 | orchestrator | Saturday 04 April 2026 00:57:53 +0000 (0:00:01.087) 0:00:01.539 ******** 2026-04-04 01:06:25.188793 | orchestrator | changed: [testbed-manager] 2026-04-04 01:06:25.188797 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.188801 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:25.188804 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:25.188808 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.188812 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:06:25.188816 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.188820 | orchestrator | 2026-04-04 01:06:25.188824 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:06:25.188828 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.573) 0:00:02.112 ******** 2026-04-04 01:06:25.188831 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-04-04 01:06:25.188835 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-04-04 01:06:25.188839 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-04-04 01:06:25.188843 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-04-04 01:06:25.188847 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-04-04 01:06:25.188851 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-04-04 01:06:25.188854 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-04-04 01:06:25.188858 | orchestrator | 2026-04-04 01:06:25.188862 | orchestrator | PLAY [Bootstrap nova API databases] 
******************************************** 2026-04-04 01:06:25.188866 | orchestrator | 2026-04-04 01:06:25.188870 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-04-04 01:06:25.188873 | orchestrator | Saturday 04 April 2026 00:57:54 +0000 (0:00:00.613) 0:00:02.726 ******** 2026-04-04 01:06:25.188877 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:25.188881 | orchestrator | 2026-04-04 01:06:25.188885 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-04-04 01:06:25.188889 | orchestrator | Saturday 04 April 2026 00:57:55 +0000 (0:00:00.563) 0:00:03.289 ******** 2026-04-04 01:06:25.188893 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-04-04 01:06:25.188897 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-04-04 01:06:25.188901 | orchestrator | 2026-04-04 01:06:25.188905 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-04-04 01:06:25.188909 | orchestrator | Saturday 04 April 2026 00:58:00 +0000 (0:00:04.937) 0:00:08.226 ******** 2026-04-04 01:06:25.188912 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:06:25.189117 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-04 01:06:25.189131 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.189138 | orchestrator | 2026-04-04 01:06:25.189145 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-04 01:06:25.189153 | orchestrator | Saturday 04 April 2026 00:58:05 +0000 (0:00:04.825) 0:00:13.052 ******** 2026-04-04 01:06:25.189157 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.189160 | orchestrator | 2026-04-04 01:06:25.189164 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-04-04 01:06:25.189168 | orchestrator 
| Saturday 04 April 2026 00:58:05 +0000 (0:00:00.627) 0:00:13.679 ******** 2026-04-04 01:06:25.189172 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.189175 | orchestrator | 2026-04-04 01:06:25.189179 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-04-04 01:06:25.189183 | orchestrator | Saturday 04 April 2026 00:58:07 +0000 (0:00:01.777) 0:00:15.456 ******** 2026-04-04 01:06:25.189187 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.189191 | orchestrator | 2026-04-04 01:06:25.189195 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-04 01:06:25.189198 | orchestrator | Saturday 04 April 2026 00:58:10 +0000 (0:00:02.828) 0:00:18.285 ******** 2026-04-04 01:06:25.189202 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.189206 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.189209 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.189242 | orchestrator | 2026-04-04 01:06:25.189247 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-04-04 01:06:25.189251 | orchestrator | Saturday 04 April 2026 00:58:11 +0000 (0:00:01.084) 0:00:19.369 ******** 2026-04-04 01:06:25.189255 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:06:25.189259 | orchestrator | 2026-04-04 01:06:25.189263 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-04-04 01:06:25.189267 | orchestrator | Saturday 04 April 2026 00:58:46 +0000 (0:00:34.888) 0:00:54.258 ******** 2026-04-04 01:06:25.189271 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.189274 | orchestrator | 2026-04-04 01:06:25.189278 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-04 01:06:25.189282 | orchestrator | Saturday 04 April 2026 00:59:02 +0000 (0:00:15.936) 0:01:10.194 ******** 
2026-04-04 01:06:25.189286 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.189290 | orchestrator | 
2026-04-04 01:06:25.189294 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-04 01:06:25.189298 | orchestrator | Saturday 04 April 2026 00:59:15 +0000 (0:00:13.650) 0:01:23.845 ********
2026-04-04 01:06:25.189310 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.189314 | orchestrator | 
2026-04-04 01:06:25.189318 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-04 01:06:25.189322 | orchestrator | Saturday 04 April 2026 00:59:16 +0000 (0:00:00.589) 0:01:24.434 ********
2026-04-04 01:06:25.189325 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.189329 | orchestrator | 
2026-04-04 01:06:25.189333 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-04 01:06:25.189337 | orchestrator | Saturday 04 April 2026 00:59:16 +0000 (0:00:00.398) 0:01:24.833 ********
2026-04-04 01:06:25.189341 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:06:25.189345 | orchestrator | 
2026-04-04 01:06:25.189349 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-04 01:06:25.189353 | orchestrator | Saturday 04 April 2026 00:59:17 +0000 (0:00:00.525) 0:01:25.358 ********
2026-04-04 01:06:25.189464 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.189536 | orchestrator | 
2026-04-04 01:06:25.189542 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-04 01:06:25.189546 | orchestrator | Saturday 04 April 2026 00:59:38 +0000 (0:00:21.438) 0:01:46.797 ********
2026-04-04 01:06:25.189550 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.189560 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189590 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189709 | orchestrator | 
2026-04-04 01:06:25.189717 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-04 01:06:25.189723 | orchestrator | 
2026-04-04 01:06:25.189730 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-04 01:06:25.189736 | orchestrator | Saturday 04 April 2026 00:59:39 +0000 (0:00:00.462) 0:01:47.259 ********
2026-04-04 01:06:25.189742 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:06:25.189748 | orchestrator | 
2026-04-04 01:06:25.189755 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-04 01:06:25.189761 | orchestrator | Saturday 04 April 2026 00:59:40 +0000 (0:00:01.138) 0:01:48.397 ********
2026-04-04 01:06:25.189802 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189808 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189812 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.189816 | orchestrator | 
2026-04-04 01:06:25.189819 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-04 01:06:25.189823 | orchestrator | Saturday 04 April 2026 00:59:42 +0000 (0:00:02.277) 0:01:50.675 ********
2026-04-04 01:06:25.189827 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189831 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189835 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.189839 | orchestrator | 
2026-04-04 01:06:25.189843 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-04 01:06:25.189847 | orchestrator | Saturday 04 April 2026 00:59:44 +0000 (0:00:02.199) 0:01:52.875 ********
2026-04-04 01:06:25.189850 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.189854 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189858 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189862 | orchestrator | 
2026-04-04 01:06:25.189866 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-04 01:06:25.189870 | orchestrator | Saturday 04 April 2026 00:59:45 +0000 (0:00:00.467) 0:01:53.342 ********
2026-04-04 01:06:25.189873 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-04-04 01:06:25.189877 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189881 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-04-04 01:06:25.189885 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189888 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-04 01:06:25.189892 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-04 01:06:25.189896 | orchestrator | 
2026-04-04 01:06:25.189900 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-04 01:06:25.189904 | orchestrator | Saturday 04 April 2026 00:59:54 +0000 (0:00:08.671) 0:02:02.014 ********
2026-04-04 01:06:25.189908 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.189911 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189915 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189919 | orchestrator | 
2026-04-04 01:06:25.189923 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-04 01:06:25.189927 | orchestrator | Saturday 04 April 2026 00:59:54 +0000 (0:00:00.262) 0:02:02.276 ********
2026-04-04 01:06:25.189930 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-04-04 01:06:25.189934 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.189938 | orchestrator | skipping: [testbed-node-1] => (item=None) 
2026-04-04 01:06:25.189942 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189945 | orchestrator | skipping: [testbed-node-2] => (item=None) 
2026-04-04 01:06:25.189949 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189953 | orchestrator | 
2026-04-04 01:06:25.189956 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-04 01:06:25.189960 | orchestrator | Saturday 04 April 2026 00:59:55 +0000 (0:00:00.867) 0:02:03.144 ********
2026-04-04 01:06:25.189970 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189974 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.189978 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.189981 | orchestrator | 
2026-04-04 01:06:25.189985 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-04 01:06:25.189989 | orchestrator | Saturday 04 April 2026 00:59:55 +0000 (0:00:00.487) 0:02:03.632 ********
2026-04-04 01:06:25.189993 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.189997 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190001 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.190244 | orchestrator | 
2026-04-04 01:06:25.190259 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-04 01:06:25.190264 | orchestrator | Saturday 04 April 2026 00:59:56 +0000 (0:00:00.887) 0:02:04.519 ********
2026-04-04 01:06:25.190268 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190271 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190308 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.190313 | orchestrator | 
2026-04-04 01:06:25.190317 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-04 01:06:25.190325 | orchestrator | Saturday 04 April 2026 00:59:58 +0000 (0:00:02.207) 0:02:06.726 ********
2026-04-04 01:06:25.190329 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190333 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190337 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.190340 | orchestrator | 
2026-04-04 01:06:25.190344 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-04 01:06:25.190348 | orchestrator | Saturday 04 April 2026 01:00:20 +0000 (0:00:21.896) 0:02:28.622 ********
2026-04-04 01:06:25.190352 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190356 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190359 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.190363 | orchestrator | 
2026-04-04 01:06:25.190367 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-04 01:06:25.190371 | orchestrator | Saturday 04 April 2026 01:00:35 +0000 (0:00:14.800) 0:02:43.423 ********
2026-04-04 01:06:25.190374 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.190378 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190382 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190386 | orchestrator | 
2026-04-04 01:06:25.190389 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-04 01:06:25.190393 | orchestrator | Saturday 04 April 2026 01:00:36 +0000 (0:00:00.816) 0:02:44.239 ********
2026-04-04 01:06:25.190397 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190400 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190404 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.190408 | orchestrator | 
2026-04-04 01:06:25.190412 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-04 01:06:25.190415 | orchestrator | Saturday 04 April 2026 01:00:50 +0000 (0:00:14.281) 0:02:58.521 ********
2026-04-04 01:06:25.190419 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190423 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190426 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190430 | orchestrator | 
2026-04-04 01:06:25.190434 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-04 01:06:25.190438 | orchestrator | Saturday 04 April 2026 01:00:51 +0000 (0:00:01.166) 0:02:59.687 ********
2026-04-04 01:06:25.190441 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190445 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190449 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190453 | orchestrator | 
2026-04-04 01:06:25.190457 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-04 01:06:25.190461 | orchestrator | 
2026-04-04 01:06:25.190464 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-04 01:06:25.190473 | orchestrator | Saturday 04 April 2026 01:00:52 +0000 (0:00:00.351) 0:03:00.038 ********
2026-04-04 01:06:25.190477 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:06:25.190481 | orchestrator | 
2026-04-04 01:06:25.190485 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-04-04 01:06:25.190489 | orchestrator | Saturday 04 April 2026 01:00:53 +0000 (0:00:01.077) 0:03:01.115 ********
2026-04-04 01:06:25.190493 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2026-04-04 01:06:25.190496 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-04-04 01:06:25.190500 | orchestrator | 
2026-04-04 01:06:25.190504 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-04-04 01:06:25.190508 | orchestrator | Saturday 04 April 2026 01:00:56 +0000 (0:00:03.641) 0:03:04.757 ********
2026-04-04 01:06:25.190512 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2026-04-04 01:06:25.190517 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2026-04-04 01:06:25.190521 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-04 01:06:25.190524 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-04 01:06:25.190528 | orchestrator | 
2026-04-04 01:06:25.190532 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-04 01:06:25.190536 | orchestrator | Saturday 04 April 2026 01:01:03 +0000 (0:00:07.149) 0:03:11.906 ********
2026-04-04 01:06:25.190540 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:06:25.190543 | orchestrator | 
2026-04-04 01:06:25.190547 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-04 01:06:25.190551 | orchestrator | Saturday 04 April 2026 01:01:07 +0000 (0:00:03.856) 0:03:15.763 ********
2026-04-04 01:06:25.190555 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-04 01:06:25.190559 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:06:25.190562 | orchestrator | 
2026-04-04 01:06:25.190566 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-04 01:06:25.190570 | orchestrator | Saturday 04 April 2026 01:01:12 +0000 (0:00:04.612) 0:03:20.375 ********
2026-04-04 01:06:25.190574 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:06:25.190577 | orchestrator | 
2026-04-04 01:06:25.190581 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-04-04 01:06:25.190595 | orchestrator | Saturday 04 April 2026 01:01:16 +0000 (0:00:03.949) 0:03:24.325 ********
2026-04-04 01:06:25.190600 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-04 01:06:25.190604 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-04-04 01:06:25.190607 | orchestrator | 
2026-04-04 01:06:25.190611 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-04 01:06:25.190647 | orchestrator | Saturday 04 April 2026 01:01:24 +0000 (0:00:07.774) 0:03:32.099 ********
2026-04-04 01:06:25.190656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190705 | orchestrator | 
2026-04-04 01:06:25.190709 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-04 01:06:25.190713 | orchestrator | Saturday 04 April 2026 01:01:27 +0000 (0:00:03.492) 0:03:35.591 ********
2026-04-04 01:06:25.190716 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190720 | orchestrator | 
2026-04-04 01:06:25.190724 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-04 01:06:25.190728 | orchestrator | Saturday 04 April 2026 01:01:27 +0000 (0:00:00.167) 0:03:35.759 ********
2026-04-04 01:06:25.190731 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190735 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190739 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190743 | orchestrator | 
2026-04-04 01:06:25.190747 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-04 01:06:25.190750 | orchestrator | Saturday 04 April 2026 01:01:28 +0000 (0:00:00.480) 0:03:36.239 ********
2026-04-04 01:06:25.190754 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-04 01:06:25.190758 | orchestrator | 
2026-04-04 01:06:25.190762 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-04-04 01:06:25.190766 | orchestrator | Saturday 04 April 2026 01:01:29 +0000 (0:00:01.171) 0:03:37.410 ********
2026-04-04 01:06:25.190769 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190788 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190795 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190801 | orchestrator | 
2026-04-04 01:06:25.190807 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-04 01:06:25.190813 | orchestrator | Saturday 04 April 2026 01:01:30 +0000 (0:00:00.737) 0:03:38.148 ********
2026-04-04 01:06:25.190817 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:06:25.190821 | orchestrator | 
2026-04-04 01:06:25.190825 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-04-04 01:06:25.190829 | orchestrator | Saturday 04 April 2026 01:01:31 +0000 (0:00:01.076) 0:03:39.225 ********
2026-04-04 01:06:25.190833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-04-04 01:06:25.190866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.190893 | orchestrator | 
2026-04-04 01:06:25.190897 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-04-04 01:06:25.190901 | orchestrator | Saturday 04 April 2026 01:01:33 +0000 (0:00:02.275) 0:03:41.500 ********
2026-04-04 01:06:25.190905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-04-04 01:06:25.190909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-04 01:06:25.190913 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.190917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-04-04 01:06:25.190922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-04 01:06:25.190929 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.190945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-04-04 01:06:25.190950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-04 01:06:25.190954 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.190958 | orchestrator | 
2026-04-04 01:06:25.190962 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-04-04 01:06:25.190966 | orchestrator | Saturday 04 April 2026 01:01:34 +0000 (0:00:00.725) 0:03:42.225 ********
2026-04-04 01:06:25.190970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2026-04-04 01:06:25.190974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-04 01:06:25.190990 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.191021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 01:06:25.191029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.191036 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 01:06:25.191048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.191059 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191066 | orchestrator | 2026-04-04 01:06:25.191071 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-04 01:06:25.191076 | orchestrator | Saturday 04 April 2026 01:01:35 +0000 (0:00:01.442) 0:03:43.668 ******** 2026-04-04 01:06:25.191098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191158 | orchestrator | 2026-04-04 01:06:25.191164 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-04 01:06:25.191170 | orchestrator | Saturday 04 April 2026 01:01:38 +0000 (0:00:02.692) 0:03:46.360 ******** 2026-04-04 01:06:25.191177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191238 | orchestrator | 2026-04-04 01:06:25.191244 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-04 01:06:25.191250 | orchestrator | Saturday 04 April 2026 01:01:47 +0000 (0:00:08.599) 0:03:54.959 ******** 2026-04-04 01:06:25.191257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 01:06:25.191287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.191296 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.191304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 01:06:25.191312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.191320 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-04 01:06:25.191340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.191346 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191352 | orchestrator | 2026-04-04 01:06:25.191359 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-04 01:06:25.191365 | orchestrator | Saturday 04 April 2026 01:01:48 +0000 (0:00:01.386) 0:03:56.346 ******** 2026-04-04 01:06:25.191371 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:25.191377 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.191383 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:25.191389 | orchestrator | 2026-04-04 01:06:25.191417 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-04 01:06:25.191425 | orchestrator | Saturday 04 April 2026 01:01:51 +0000 (0:00:02.729) 0:03:59.075 ******** 2026-04-04 01:06:25.191432 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.191439 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191446 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191452 | orchestrator | 2026-04-04 01:06:25.191459 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-04 01:06:25.191464 | orchestrator | 
Saturday 04 April 2026 01:01:51 +0000 (0:00:00.306) 0:03:59.381 ******** 2026-04-04 01:06:25.191469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-04 01:06:25.191503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.191521 | orchestrator | 2026-04-04 01:06:25.191526 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-04 01:06:25.191531 | orchestrator | Saturday 04 April 2026 01:01:53 +0000 (0:00:02.231) 0:04:01.612 
******** 2026-04-04 01:06:25.191535 | orchestrator | 2026-04-04 01:06:25.191540 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-04 01:06:25.191545 | orchestrator | Saturday 04 April 2026 01:01:54 +0000 (0:00:00.355) 0:04:01.968 ******** 2026-04-04 01:06:25.191548 | orchestrator | 2026-04-04 01:06:25.191552 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-04 01:06:25.191556 | orchestrator | Saturday 04 April 2026 01:01:54 +0000 (0:00:00.327) 0:04:02.295 ******** 2026-04-04 01:06:25.191560 | orchestrator | 2026-04-04 01:06:25.191566 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-04 01:06:25.191573 | orchestrator | Saturday 04 April 2026 01:01:54 +0000 (0:00:00.279) 0:04:02.575 ******** 2026-04-04 01:06:25.191579 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.191586 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:25.191593 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:25.191600 | orchestrator | 2026-04-04 01:06:25.191607 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-04 01:06:25.191614 | orchestrator | Saturday 04 April 2026 01:02:10 +0000 (0:00:15.396) 0:04:17.971 ******** 2026-04-04 01:06:25.191621 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:06:25.191628 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:06:25.191635 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:06:25.191642 | orchestrator | 2026-04-04 01:06:25.191649 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-04 01:06:25.191656 | orchestrator | 2026-04-04 01:06:25.191663 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:06:25.191670 | orchestrator | Saturday 04 April 2026 01:02:16 +0000 
(0:00:06.439) 0:04:24.411 ******** 2026-04-04 01:06:25.191677 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:25.191684 | orchestrator | 2026-04-04 01:06:25.191690 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:06:25.191696 | orchestrator | Saturday 04 April 2026 01:02:17 +0000 (0:00:00.977) 0:04:25.389 ******** 2026-04-04 01:06:25.191702 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.191708 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.191715 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.191722 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.191729 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191736 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191743 | orchestrator | 2026-04-04 01:06:25.191750 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-04 01:06:25.191757 | orchestrator | Saturday 04 April 2026 01:02:18 +0000 (0:00:00.565) 0:04:25.954 ******** 2026-04-04 01:06:25.191764 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.191769 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191829 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191834 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:06:25.191838 | orchestrator | 2026-04-04 01:06:25.191842 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-04 01:06:25.191869 | orchestrator | Saturday 04 April 2026 01:02:18 +0000 (0:00:00.807) 0:04:26.762 ******** 2026-04-04 01:06:25.191873 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-04 01:06:25.191877 | orchestrator | ok: [testbed-node-3] => 
(item=br_netfilter) 2026-04-04 01:06:25.191881 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-04 01:06:25.191891 | orchestrator | 2026-04-04 01:06:25.191895 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-04 01:06:25.191899 | orchestrator | Saturday 04 April 2026 01:02:19 +0000 (0:00:00.919) 0:04:27.682 ******** 2026-04-04 01:06:25.191903 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-04-04 01:06:25.191907 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-04-04 01:06:25.191910 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-04-04 01:06:25.191914 | orchestrator | 2026-04-04 01:06:25.191918 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-04 01:06:25.191922 | orchestrator | Saturday 04 April 2026 01:02:20 +0000 (0:00:01.148) 0:04:28.831 ******** 2026-04-04 01:06:25.191925 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-04 01:06:25.191929 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.191933 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-04 01:06:25.191937 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.191940 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-04 01:06:25.191944 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.191948 | orchestrator | 2026-04-04 01:06:25.191951 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-04 01:06:25.191955 | orchestrator | Saturday 04 April 2026 01:02:21 +0000 (0:00:00.573) 0:04:29.404 ******** 2026-04-04 01:06:25.191959 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 01:06:25.191963 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 
01:06:25.191967 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.191970 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 01:06:25.191974 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 01:06:25.191978 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.191982 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-04 01:06:25.191985 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-04 01:06:25.191989 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-04 01:06:25.191993 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.191997 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-04 01:06:25.192000 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-04 01:06:25.192004 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-04 01:06:25.192008 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-04 01:06:25.192012 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-04 01:06:25.192016 | orchestrator | 2026-04-04 01:06:25.192019 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-04 01:06:25.192023 | orchestrator | Saturday 04 April 2026 01:02:23 +0000 (0:00:01.952) 0:04:31.357 ******** 2026-04-04 01:06:25.192027 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.192031 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.192034 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.192038 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.192042 | orchestrator | changed: 
[testbed-node-4] 2026-04-04 01:06:25.192046 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.192049 | orchestrator | 2026-04-04 01:06:25.192053 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-04-04 01:06:25.192057 | orchestrator | Saturday 04 April 2026 01:02:24 +0000 (0:00:01.136) 0:04:32.494 ******** 2026-04-04 01:06:25.192061 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.192064 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.192071 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.192075 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.192079 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.192082 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:06:25.192086 | orchestrator | 2026-04-04 01:06:25.192090 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-04 01:06:25.192094 | orchestrator | Saturday 04 April 2026 01:02:26 +0000 (0:00:01.969) 0:04:34.463 ******** 2026-04-04 01:06:25.192099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 
01:06:25.192118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192215 | orchestrator | 2026-04-04 01:06:25.192219 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2026-04-04 01:06:25.192223 | orchestrator | Saturday 04 April 2026 01:02:28 +0000 (0:00:02.201) 0:04:36.665 ******** 2026-04-04 01:06:25.192227 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:06:25.192231 | orchestrator | 2026-04-04 01:06:25.192235 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-04 01:06:25.192239 | orchestrator | Saturday 04 April 2026 01:02:29 +0000 (0:00:01.052) 0:04:37.718 ******** 2026-04-04 01:06:25.192243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192251 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192281 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192294 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.192346 | orchestrator | 2026-04-04 01:06:25.192350 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-04 01:06:25.192354 | orchestrator | Saturday 04 April 2026 01:02:33 +0000 (0:00:04.023) 0:04:41.742 ******** 2026-04-04 01:06:25.192369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192385 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.192389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192414 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.192418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192432 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.192436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192446 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.192460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192469 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.192473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192483 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.192487 | orchestrator | 2026-04-04 01:06:25.192491 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-04 01:06:25.192495 | orchestrator | Saturday 04 April 2026 01:02:35 +0000 (0:00:01.401) 0:04:43.143 ******** 2026-04-04 01:06:25.192499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192525 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.192529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192547 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.192553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192557 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.192572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.192580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.192584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192588 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.192592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192600 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.192607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-04 01:06:25.192622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-04 01:06:25.192627 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.192633 | orchestrator | 2026-04-04 01:06:25.192637 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-04 01:06:25.192641 | orchestrator | Saturday 04 April 2026 01:02:37 +0000 (0:00:02.435) 0:04:45.579 ******** 2026-04-04 01:06:25.192647 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.192654 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.192661 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.192668 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:06:25.192674 | orchestrator | 2026-04-04 01:06:25.192681 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-04 01:06:25.192688 | orchestrator | Saturday 04 April 2026 01:02:38 +0000 (0:00:00.915) 0:04:46.495 ******** 2026-04-04 01:06:25.192695 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:06:25.192702 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:06:25.192709 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 01:06:25.192716 | orchestrator | 2026-04-04 01:06:25.192722 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-04 01:06:25.192729 | orchestrator | Saturday 04 April 2026 01:02:39 +0000 (0:00:01.010) 0:04:47.505 ******** 2026-04-04 01:06:25.192733 | orchestrator | ok: 
[testbed-node-3 -> localhost] 2026-04-04 01:06:25.192737 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:06:25.192741 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 01:06:25.192745 | orchestrator | 2026-04-04 01:06:25.192748 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-04 01:06:25.192752 | orchestrator | Saturday 04 April 2026 01:02:40 +0000 (0:00:01.124) 0:04:48.630 ******** 2026-04-04 01:06:25.192756 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:06:25.192760 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:06:25.192764 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:06:25.192767 | orchestrator | 2026-04-04 01:06:25.192781 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-04 01:06:25.192786 | orchestrator | Saturday 04 April 2026 01:02:41 +0000 (0:00:00.461) 0:04:49.092 ******** 2026-04-04 01:06:25.192790 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:06:25.192794 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:06:25.192798 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:06:25.192802 | orchestrator | 2026-04-04 01:06:25.192805 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-04 01:06:25.192809 | orchestrator | Saturday 04 April 2026 01:02:41 +0000 (0:00:00.470) 0:04:49.562 ******** 2026-04-04 01:06:25.192813 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:06:25.192817 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:06:25.192821 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:06:25.192824 | orchestrator | 2026-04-04 01:06:25.192828 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-04 01:06:25.192832 | orchestrator | Saturday 04 April 2026 01:02:42 +0000 (0:00:01.101) 0:04:50.664 ******** 
2026-04-04 01:06:25.192836 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:06:25.192840 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:06:25.192843 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:06:25.192847 | orchestrator | 2026-04-04 01:06:25.192851 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-04 01:06:25.192855 | orchestrator | Saturday 04 April 2026 01:02:44 +0000 (0:00:01.319) 0:04:51.984 ******** 2026-04-04 01:06:25.192858 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-04 01:06:25.192862 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-04 01:06:25.192866 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-04 01:06:25.192870 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-04 01:06:25.192879 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-04 01:06:25.192882 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-04 01:06:25.192886 | orchestrator | 2026-04-04 01:06:25.192890 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-04 01:06:25.192894 | orchestrator | Saturday 04 April 2026 01:02:47 +0000 (0:00:03.364) 0:04:55.348 ******** 2026-04-04 01:06:25.192897 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.192901 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.192905 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.192908 | orchestrator | 2026-04-04 01:06:25.192912 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-04 01:06:25.192916 | orchestrator | Saturday 04 April 2026 01:02:47 +0000 (0:00:00.274) 0:04:55.623 ******** 2026-04-04 01:06:25.192920 | orchestrator | skipping: [testbed-node-3] 2026-04-04 
01:06:25.192923 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.192930 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.192934 | orchestrator | 2026-04-04 01:06:25.192938 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-04 01:06:25.192941 | orchestrator | Saturday 04 April 2026 01:02:47 +0000 (0:00:00.272) 0:04:55.896 ******** 2026-04-04 01:06:25.192945 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.192949 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:06:25.192952 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.192956 | orchestrator | 2026-04-04 01:06:25.192960 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-04 01:06:25.192964 | orchestrator | Saturday 04 April 2026 01:02:49 +0000 (0:00:01.184) 0:04:57.081 ******** 2026-04-04 01:06:25.192983 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-04 01:06:25.192988 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-04 01:06:25.192992 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-04 01:06:25.192996 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-04 01:06:25.193000 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-04 01:06:25.193004 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-04 
01:06:25.193008 | orchestrator | 2026-04-04 01:06:25.193012 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-04 01:06:25.193015 | orchestrator | Saturday 04 April 2026 01:02:51 +0000 (0:00:02.805) 0:04:59.886 ******** 2026-04-04 01:06:25.193022 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 01:06:25.193028 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 01:06:25.193039 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 01:06:25.193046 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-04 01:06:25.193053 | orchestrator | changed: [testbed-node-3] 2026-04-04 01:06:25.193059 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-04 01:06:25.193067 | orchestrator | changed: [testbed-node-5] 2026-04-04 01:06:25.193074 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-04 01:06:25.193081 | orchestrator | changed: [testbed-node-4] 2026-04-04 01:06:25.193088 | orchestrator | 2026-04-04 01:06:25.193095 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-04 01:06:25.193100 | orchestrator | Saturday 04 April 2026 01:02:55 +0000 (0:00:03.516) 0:05:03.402 ******** 2026-04-04 01:06:25.193104 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193108 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193116 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193120 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-04 01:06:25.193124 | orchestrator | 2026-04-04 01:06:25.193128 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-04 01:06:25.193131 | orchestrator | Saturday 04 April 2026 01:02:57 +0000 (0:00:01.628) 0:05:05.031 ******** 2026-04-04 01:06:25.193135 | orchestrator | ok: [testbed-node-3 -> 
localhost] 2026-04-04 01:06:25.193139 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-04 01:06:25.193143 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-04 01:06:25.193147 | orchestrator | 2026-04-04 01:06:25.193150 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-04 01:06:25.193154 | orchestrator | Saturday 04 April 2026 01:02:58 +0000 (0:00:00.890) 0:05:05.921 ******** 2026-04-04 01:06:25.193158 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193162 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193166 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.193169 | orchestrator | 2026-04-04 01:06:25.193173 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-04 01:06:25.193177 | orchestrator | Saturday 04 April 2026 01:02:58 +0000 (0:00:00.300) 0:05:06.222 ******** 2026-04-04 01:06:25.193181 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193185 | orchestrator | 2026-04-04 01:06:25.193189 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-04 01:06:25.193192 | orchestrator | Saturday 04 April 2026 01:02:58 +0000 (0:00:00.194) 0:05:06.416 ******** 2026-04-04 01:06:25.193196 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193200 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193205 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.193212 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193217 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193223 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193229 | orchestrator | 2026-04-04 01:06:25.193235 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-04 01:06:25.193241 | orchestrator | Saturday 04 April 2026 01:02:59 +0000 (0:00:00.793) 
0:05:07.210 ******** 2026-04-04 01:06:25.193248 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-04 01:06:25.193254 | orchestrator | 2026-04-04 01:06:25.193261 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-04 01:06:25.193267 | orchestrator | Saturday 04 April 2026 01:02:59 +0000 (0:00:00.683) 0:05:07.894 ******** 2026-04-04 01:06:25.193273 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193277 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193280 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.193284 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193288 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193291 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193295 | orchestrator | 2026-04-04 01:06:25.193302 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-04 01:06:25.193306 | orchestrator | Saturday 04 April 2026 01:03:00 +0000 (0:00:00.538) 0:05:08.432 ******** 2026-04-04 01:06:25.193314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2026-04-04 01:06:25.193323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193357 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193365 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193381 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193397 | orchestrator | 2026-04-04 01:06:25.193403 | orchestrator | TASK [nova-cell : Copying over nova.conf] 
************************************** 2026-04-04 01:06:25.193412 | orchestrator | Saturday 04 April 2026 01:03:04 +0000 (0:00:04.158) 0:05:12.591 ******** 2026-04-04 01:06:25.193421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.193428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.193437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.193452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.193460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-04 01:06:25.193466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-04 01:06:25.193473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-04 01:06:25.193548 | orchestrator | 2026-04-04 01:06:25.193552 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-04 01:06:25.193556 | orchestrator | Saturday 04 April 2026 01:03:11 +0000 (0:00:06.667) 0:05:19.258 ******** 2026-04-04 01:06:25.193560 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193564 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193567 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193571 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193577 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.193581 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193585 | orchestrator | 2026-04-04 01:06:25.193589 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-04 01:06:25.193593 | orchestrator | Saturday 04 April 2026 01:03:12 +0000 (0:00:01.328) 0:05:20.587 ******** 2026-04-04 01:06:25.193597 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:06:25.193600 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:06:25.193604 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-04 01:06:25.193608 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:06:25.193612 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:06:25.193615 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:06:25.193619 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193623 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:06:25.193627 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193630 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-04 01:06:25.193634 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193638 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-04 01:06:25.193642 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:06:25.193648 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:06:25.193656 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-04 01:06:25.193665 | orchestrator | 2026-04-04 01:06:25.193671 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-04 01:06:25.193677 | orchestrator | Saturday 04 April 2026 01:03:16 +0000 (0:00:03.713) 
0:05:24.300 ******** 2026-04-04 01:06:25.193684 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193690 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193697 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.193703 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193710 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193713 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193717 | orchestrator | 2026-04-04 01:06:25.193721 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-04 01:06:25.193725 | orchestrator | Saturday 04 April 2026 01:03:17 +0000 (0:00:00.627) 0:05:24.928 ******** 2026-04-04 01:06:25.193729 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:06:25.193733 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:06:25.193747 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:06:25.193755 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:06:25.193760 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-04 01:06:25.193766 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193785 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-04 01:06:25.193791 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193797 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193802 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193808 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193815 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193821 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193830 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-04 01:06:25.193833 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193837 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193841 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193845 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193849 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193856 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-04 01:06:25.193863 | orchestrator | 2026-04-04 01:06:25.193867 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-04 01:06:25.193871 | orchestrator | Saturday 04 April 2026 01:03:21 +0000 (0:00:04.856) 0:05:29.785 ******** 
2026-04-04 01:06:25.193875 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:06:25.193879 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:06:25.193882 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-04 01:06:25.193886 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:06:25.193890 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:06:25.193894 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:06:25.193897 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-04 01:06:25.193901 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:06:25.193905 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-04 01:06:25.193909 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:06:25.193916 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:06:25.193920 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:06:25.193924 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-04 01:06:25.193928 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:06:25.193931 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:06:25.193935 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.193939 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:06:25.193943 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.193947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-04 01:06:25.193950 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:06:25.193954 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-04 01:06:25.193958 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.193961 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:06:25.193965 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:06:25.193969 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-04 01:06:25.193973 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:06:25.193976 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-04 01:06:25.193980 | orchestrator | 2026-04-04 01:06:25.193984 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-04 01:06:25.193988 | orchestrator | Saturday 04 April 2026 01:03:30 +0000 (0:00:08.445) 0:05:38.230 ******** 2026-04-04 01:06:25.193991 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.193995 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.193999 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.194003 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.194006 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.194010 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.194034 | orchestrator | 2026-04-04 01:06:25.194038 | 
orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-04-04 01:06:25.194042 | orchestrator | Saturday 04 April 2026 01:03:30 +0000 (0:00:00.414) 0:05:38.645 ********
2026-04-04 01:06:25.194046 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.194049 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.194053 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.194057 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194061 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194064 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194068 | orchestrator |
2026-04-04 01:06:25.194074 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-04-04 01:06:25.194078 | orchestrator | Saturday 04 April 2026 01:03:31 +0000 (0:00:00.617) 0:05:39.263 ********
2026-04-04 01:06:25.194082 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194086 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194090 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194093 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194097 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194101 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194105 | orchestrator |
2026-04-04 01:06:25.194108 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] *******************
2026-04-04 01:06:25.194112 | orchestrator | Saturday 04 April 2026 01:03:33 +0000 (0:00:02.070) 0:05:41.334 ********
2026-04-04 01:06:25.194116 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194129 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194133 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194140 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194145 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194156 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194162 | orchestrator |
2026-04-04 01:06:25.194168 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-04-04 01:06:25.194174 | orchestrator | Saturday 04 April 2026 01:03:35 +0000 (0:00:02.038) 0:05:43.373 ********
2026-04-04 01:06:25.194180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194198 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.194204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194234 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.194241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194259 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.194269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194291 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194310 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194330 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194335 | orchestrator |
2026-04-04 01:06:25.194341 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-04-04 01:06:25.194347 | orchestrator | Saturday 04 April 2026 01:03:36 +0000 (0:00:01.364) 0:05:44.737 ********
2026-04-04 01:06:25.194353 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-04 01:06:25.194359 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194365 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.194371 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-04 01:06:25.194382 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194388 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.194395 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-04 01:06:25.194401 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194407 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.194413 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-04 01:06:25.194420 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194426 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194432 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-04 01:06:25.194441 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194448 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194454 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-04 01:06:25.194460 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-04 01:06:25.194467 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194473 | orchestrator |
2026-04-04 01:06:25.194479 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-04-04 01:06:25.194486 | orchestrator | Saturday 04 April 2026 01:03:37 +0000 (0:00:00.688) 0:05:45.426 ********
2026-04-04 01:06:25.194497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194504 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-04 01:06:25.194517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-04 01:06:25.194542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194546 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-04 01:06:25.194553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194571 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-04 01:06:25.194578 | orchestrator |
2026-04-04 01:06:25.194582 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-04 01:06:25.194586 | orchestrator | Saturday 04 April 2026 01:03:40 +0000 (0:00:02.603) 0:05:48.029 ********
2026-04-04 01:06:25.194594 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.194598 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.194602 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.194606 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.194609 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.194613 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.194617 | orchestrator |
2026-04-04 01:06:25.194620 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194624 | orchestrator | Saturday 04 April 2026 01:03:40 +0000 (0:00:00.772) 0:05:48.801 ********
2026-04-04 01:06:25.194628 | orchestrator |
2026-04-04 01:06:25.194634 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194643 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.130) 0:05:48.932 ********
2026-04-04 01:06:25.194651 | orchestrator |
2026-04-04 01:06:25.194658 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194664 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.123) 0:05:49.055 ********
2026-04-04 01:06:25.194670 | orchestrator |
2026-04-04 01:06:25.194676 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194683 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.128) 0:05:49.184 ********
2026-04-04 01:06:25.194689 | orchestrator |
2026-04-04 01:06:25.194695 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194701 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.126) 0:05:49.311 ********
2026-04-04 01:06:25.194708 | orchestrator |
2026-04-04 01:06:25.194714 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-04 01:06:25.194721 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.268) 0:05:49.579 ********
2026-04-04 01:06:25.194728 | orchestrator |
2026-04-04 01:06:25.194733 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-04-04 01:06:25.194742 | orchestrator | Saturday 04 April 2026 01:03:41 +0000 (0:00:00.126) 0:05:49.706 ********
2026-04-04 01:06:25.194749 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.194755 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:25.194761 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:25.194766 | orchestrator |
2026-04-04 01:06:25.194789 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-04 01:06:25.194796 | orchestrator | Saturday 04 April 2026 01:03:49 +0000 (0:00:07.920) 0:05:57.626 ********
2026-04-04 01:06:25.194802 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.194808 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:25.194814 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:25.194819 | orchestrator |
2026-04-04 01:06:25.194825 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-04 01:06:25.194831 | orchestrator | Saturday 04 April 2026 01:04:01 +0000 (0:00:11.939) 0:06:09.566 ********
2026-04-04 01:06:25.194836 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194842 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194848 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194853 | orchestrator |
2026-04-04 01:06:25.194864 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-04 01:06:25.194871 | orchestrator | Saturday 04 April 2026 01:04:21 +0000 (0:00:19.391) 0:06:28.957 ********
2026-04-04 01:06:25.194877 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194883 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194890 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194896 | orchestrator |
2026-04-04 01:06:25.194902 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-04 01:06:25.194908 | orchestrator | Saturday 04 April 2026 01:04:51 +0000 (0:00:30.858) 0:06:59.816 ********
2026-04-04 01:06:25.194915 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194921 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194934 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194940 | orchestrator |
2026-04-04 01:06:25.194947 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-04 01:06:25.194953 | orchestrator | Saturday 04 April 2026 01:04:52 +0000 (0:00:00.787) 0:07:00.604 ********
2026-04-04 01:06:25.194959 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.194965 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.194971 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.194977 | orchestrator |
2026-04-04 01:06:25.194983 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-04 01:06:25.194989 | orchestrator | Saturday 04 April 2026 01:04:53 +0000 (0:00:00.720) 0:07:01.324 ********
2026-04-04 01:06:25.194994 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:06:25.195000 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:06:25.195005 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:06:25.195010 | orchestrator |
2026-04-04 01:06:25.195016 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-04 01:06:25.195023 | orchestrator | Saturday 04 April 2026 01:05:15 +0000 (0:00:21.989) 0:07:23.314 ********
2026-04-04 01:06:25.195029 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.195035 | orchestrator |
2026-04-04 01:06:25.195041 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-04 01:06:25.195047 | orchestrator | Saturday 04 April 2026 01:05:15 +0000 (0:00:00.264) 0:07:23.578 ********
2026-04-04 01:06:25.195053 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.195059 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.195065 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.195071 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.195077 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.195083 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-04-04 01:06:25.195089 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:06:25.195096 | orchestrator |
2026-04-04 01:06:25.195102 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-04 01:06:25.195108 | orchestrator | Saturday 04 April 2026 01:05:37 +0000 (0:00:21.969) 0:07:45.547 ********
2026-04-04 01:06:25.195113 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.195119 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.195126 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.195132 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.195138 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.195144 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.195151 | orchestrator |
2026-04-04 01:06:25.195157 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-04 01:06:25.195163 | orchestrator | Saturday 04 April 2026 01:05:45 +0000 (0:00:08.002) 0:07:53.550 ********
2026-04-04 01:06:25.195168 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.195174 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:06:25.195179 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:06:25.195184 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.195190 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.195197 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2026-04-04 01:06:25.195203 | orchestrator |
2026-04-04 01:06:25.195209 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-04 01:06:25.195216 | orchestrator | Saturday 04 April 2026 01:05:49 +0000 (0:00:03.420) 0:07:56.970 ********
2026-04-04 01:06:25.195222 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:06:25.195228 | orchestrator |
2026-04-04 01:06:25.195234 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-04 01:06:25.195240 | orchestrator | Saturday 04 April 2026 01:06:03 +0000 (0:00:13.957) 0:08:10.928 ********
2026-04-04 01:06:25.195252 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:06:25.195259 | orchestrator |
2026-04-04 01:06:25.195265 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-04 01:06:25.195271 | orchestrator | Saturday 04 April 2026 01:06:04 +0000 (0:00:01.386) 0:08:12.315 ********
2026-04-04 01:06:25.195278 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:06:25.195284 | orchestrator |
2026-04-04 01:06:25.195290 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-04 01:06:25.195297 | orchestrator | Saturday 04 April 2026 01:06:05 +0000 (0:00:01.372) 0:08:13.688 ********
2026-04-04 01:06:25.195303 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:06:25.195310 | orchestrator |
2026-04-04 01:06:25.195320 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-04-04 01:06:25.195327 | orchestrator | Saturday 04 April 2026 01:06:17 +0000 (0:00:12.101) 0:08:25.790 ********
2026-04-04 01:06:25.195334 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:06:25.195340 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:06:25.195347 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:06:25.195353 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:06:25.195360 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:06:25.195366 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:06:25.195373 | orchestrator |
2026-04-04 01:06:25.195380 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-04 01:06:25.195387 | orchestrator |
2026-04-04 01:06:25.195393 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-04 01:06:25.195407 | orchestrator | Saturday 04 April 2026 01:06:19 +0000 (0:00:01.707) 0:08:27.497 ********
2026-04-04 01:06:25.195413 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:06:25.195419 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:06:25.195424 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:06:25.195430 | orchestrator |
2026-04-04 01:06:25.195436 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-04 01:06:25.195442 | orchestrator |
2026-04-04 01:06:25.195447 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-04 01:06:25.195453 | orchestrator | Saturday 04 April 2026 01:06:20 +0000 (0:00:01.139) 0:08:28.637 ********
2026-04-04 01:06:25.195459 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:06:25.195466 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:06:25.195471 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:06:25.195476 | orchestrator |
2026-04-04 01:06:25.195482 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-04 01:06:25.195488 | orchestrator |
2026-04-04 01:06:25.195494 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-04 01:06:25.195500 | orchestrator | Saturday 04 April 2026 01:06:21 +0000 (0:00:00.551) 0:08:29.188 ********
2026-04-04 01:06:25.195505 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-04 01:06:25.195511 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-04 01:06:25.195516 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-04 01:06:25.195522 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-04 01:06:25.195528 | orchestrator |
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-04-04 01:06:25.195534 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195539 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-04-04 01:06:25.195545 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-04 01:06:25.195551 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-04 01:06:25.195557 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-04-04 01:06:25.195563 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:06:25.195570 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-04-04 01:06:25.195575 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195588 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-04-04 01:06:25.195595 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-04 01:06:25.195601 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-04 01:06:25.195607 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-04-04 01:06:25.195614 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-04-04 01:06:25.195620 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195626 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:06:25.195630 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-04-04 01:06:25.195633 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-04 01:06:25.195637 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-04 01:06:25.195641 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-04-04 01:06:25.195645 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-04-04 
01:06:25.195649 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195653 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:06:25.195656 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-04-04 01:06:25.195660 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-04 01:06:25.195664 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-04 01:06:25.195668 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-04-04 01:06:25.195671 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-04-04 01:06:25.195675 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195679 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.195683 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.195687 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-04-04 01:06:25.195690 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-04 01:06:25.195694 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-04 01:06:25.195698 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-04-04 01:06:25.195702 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-04-04 01:06:25.195705 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-04-04 01:06:25.195709 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.195713 | orchestrator | 2026-04-04 01:06:25.195717 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-04-04 01:06:25.195721 | orchestrator | 2026-04-04 01:06:25.195725 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-04-04 01:06:25.195732 | orchestrator | Saturday 04 April 2026 01:06:22 +0000 (0:00:01.236) 
0:08:30.425 ******** 2026-04-04 01:06:25.195736 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-04-04 01:06:25.195740 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-04 01:06:25.195744 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.195748 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-04-04 01:06:25.195751 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-04 01:06:25.195755 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.195759 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-04-04 01:06:25.195763 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-04 01:06:25.195766 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:06:25.195770 | orchestrator | 2026-04-04 01:06:25.195904 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-04-04 01:06:25.195913 | orchestrator | 2026-04-04 01:06:25.195917 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-04-04 01:06:25.195921 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.710) 0:08:31.135 ******** 2026-04-04 01:06:25.195929 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.195933 | orchestrator | 2026-04-04 01:06:25.195937 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-04-04 01:06:25.195941 | orchestrator | 2026-04-04 01:06:25.195944 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-04-04 01:06:25.195948 | orchestrator | Saturday 04 April 2026 01:06:23 +0000 (0:00:00.622) 0:08:31.757 ******** 2026-04-04 01:06:25.195952 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:06:25.195956 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:06:25.195959 | orchestrator | skipping: [testbed-node-2] 
2026-04-04 01:06:25.195963 | orchestrator | 
2026-04-04 01:06:25.195967 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:06:25.195971 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-04 01:06:25.195975 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-04 01:06:25.195979 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-04 01:06:25.195983 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-04 01:06:25.195987 | orchestrator | testbed-node-3 : ok=46  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-04 01:06:25.195991 | orchestrator | testbed-node-4 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-04 01:06:25.195995 | orchestrator | testbed-node-5 : ok=40  changed=28  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-04 01:06:25.195998 | orchestrator | 
2026-04-04 01:06:25.196002 | orchestrator | 
2026-04-04 01:06:25.196006 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:06:25.196010 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:00.560) 0:08:32.318 ********
2026-04-04 01:06:25.196013 | orchestrator | ===============================================================================
2026-04-04 01:06:25.196017 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.89s
2026-04-04 01:06:25.196021 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 30.86s
2026-04-04 01:06:25.196025 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.99s
2026-04-04 01:06:25.196028 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.97s
2026-04-04 01:06:25.196032 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.90s
2026-04-04 01:06:25.196041 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.44s
2026-04-04 01:06:25.196045 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.39s
2026-04-04 01:06:25.196048 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.94s
2026-04-04 01:06:25.196052 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 15.40s
2026-04-04 01:06:25.196056 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.80s
2026-04-04 01:06:25.196059 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.28s
2026-04-04 01:06:25.196063 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.96s
2026-04-04 01:06:25.196067 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.65s
2026-04-04 01:06:25.196071 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.10s
2026-04-04 01:06:25.196077 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.94s
2026-04-04 01:06:25.196081 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.67s
2026-04-04 01:06:25.196085 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.60s
2026-04-04 01:06:25.196089 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.45s
2026-04-04 01:06:25.196095 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.00s
2026-04-04 01:06:25.196099 | orchestrator | nova-cell : Restart 
nova-conductor container ---------------------------- 7.92s
2026-04-04 01:06:25.196103 | orchestrator | 2026-04-04 01:06:25 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:06:28.243062 | orchestrator | 2026-04-04 01:06:28 | INFO  | Task 13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state STARTED
2026-04-04 01:06:28.243122 | orchestrator | 2026-04-04 01:06:28 | INFO  | Wait 1 second(s) until the next check
2026-04-04 01:08:51.227371 | orchestrator | 2026-04-04 01:08:51 | INFO  | Task 
13213f6e-e5b7-4b53-904f-a7658a0bb53f is in state SUCCESS
2026-04-04 01:08:51.228912 | orchestrator | 
2026-04-04 01:08:51.228948 | orchestrator | 
2026-04-04 01:08:51.228954 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-04 01:08:51.228958 | orchestrator | 
2026-04-04 01:08:51.228963 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-04 01:08:51.228967 | orchestrator | Saturday 04 April 2026 01:04:31 +0000 (0:00:00.338) 0:00:00.338 ********
2026-04-04 01:08:51.228971 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:08:51.228977 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:08:51.228984 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:08:51.228993 | orchestrator | 
2026-04-04 01:08:51.229004 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-04 01:08:51.229010 | orchestrator | Saturday 04 April 2026 01:04:31 +0000 (0:00:00.315) 0:00:00.653 ********
2026-04-04 01:08:51.229016 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-04 01:08:51.229023 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-04 01:08:51.229030 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-04 01:08:51.229039 | orchestrator | 
2026-04-04 01:08:51.229046 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-04 01:08:51.229054 | orchestrator | 
2026-04-04 01:08:51.229062 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-04 01:08:51.229070 | orchestrator | Saturday 04 April 2026 01:04:32 +0000 (0:00:00.290) 0:00:00.944 ********
2026-04-04 01:08:51.229076 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:08:51.229080 | orchestrator | 
2026-04-04 01:08:51.229084 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-04 01:08:51.229088 | orchestrator | Saturday 04 April 2026 01:04:33 +0000 (0:00:00.950) 0:00:01.894 ********
2026-04-04 01:08:51.229093 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-04 01:08:51.229096 | orchestrator | 
2026-04-04 01:08:51.229100 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-04 01:08:51.229104 | orchestrator | Saturday 04 April 2026 01:04:36 +0000 (0:00:03.536) 0:00:05.431 ********
2026-04-04 01:08:51.229108 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-04 01:08:51.229112 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-04 01:08:51.229129 | orchestrator | 
2026-04-04 01:08:51.229133 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-04 01:08:51.229144 | orchestrator | Saturday 04 April 2026 01:04:42 +0000 (0:00:05.571) 0:00:11.002 ********
2026-04-04 01:08:51.229148 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-04 01:08:51.229152 | orchestrator | 
2026-04-04 01:08:51.229156 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-04 01:08:51.229160 | orchestrator | Saturday 04 April 2026 01:04:45 +0000 (0:00:02.946) 0:00:13.949 ********
2026-04-04 01:08:51.229224 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-04 01:08:51.229295 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-04 01:08:51.229301 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-04 01:08:51.229308 | orchestrator | 
2026-04-04 01:08:51.229315 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-04 01:08:51.229322 | orchestrator | Saturday 04 April 2026 01:04:52 +0000 (0:00:07.302) 0:00:21.251 ********
2026-04-04 01:08:51.229328 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-04 01:08:51.229334 | orchestrator | 
2026-04-04 01:08:51.229341 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-04 01:08:51.229347 | orchestrator | Saturday 04 April 2026 01:04:55 +0000 (0:00:03.379) 0:00:24.630 ********
2026-04-04 01:08:51.229353 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-04 01:08:51.229360 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-04 01:08:51.229367 | orchestrator | 
2026-04-04 01:08:51.229373 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-04 01:08:51.229407 | orchestrator | Saturday 04 April 2026 01:05:02 +0000 (0:00:06.835) 0:00:31.466 ********
2026-04-04 01:08:51.229555 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-04 01:08:51.229563 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-04 01:08:51.229567 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-04 01:08:51.229571 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-04 01:08:51.229575 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-04 01:08:51.229579 | orchestrator | 
2026-04-04 01:08:51.229583 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-04 01:08:51.229586 | orchestrator | Saturday 04 April 2026 01:05:17 +0000 (0:00:14.795) 0:00:46.261 ********
2026-04-04 01:08:51.229590 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:08:51.229595 | orchestrator | 
2026-04-04 01:08:51.229598 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-04 01:08:51.229602 | orchestrator | Saturday 04 April 2026 01:05:18 +0000 (0:00:01.336) 0:00:47.598 ********
2026-04-04 01:08:51.229606 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.229610 | orchestrator | 
2026-04-04 01:08:51.229614 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-04 01:08:51.229618 | orchestrator | Saturday 04 April 2026 01:05:23 +0000 (0:00:04.553) 0:00:52.152 ********
2026-04-04 01:08:51.229622 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.229626 | orchestrator | 
2026-04-04 01:08:51.229629 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-04 01:08:51.229727 | orchestrator | Saturday 04 April 2026 01:05:27 +0000 (0:00:03.902) 0:00:56.054 ********
2026-04-04 01:08:51.229759 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:08:51.229768 | orchestrator | 
2026-04-04 01:08:51.229794 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-04 01:08:51.229799 | orchestrator | Saturday 04 April 2026 01:05:30 +0000 (0:00:02.884) 0:00:58.939 ********
2026-04-04 01:08:51.229803 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-04 01:08:51.229808 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-04 01:08:51.229941 | orchestrator | 
2026-04-04 01:08:51.229948 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-04 01:08:51.229952 | orchestrator | Saturday 04 April 2026 01:05:39 +0000 (0:00:09.397) 0:01:08.336 ********
2026-04-04 01:08:51.229956 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-04 01:08:51.229960 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-04 01:08:51.229965 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-04 01:08:51.229970 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-04 01:08:51.229974 | orchestrator | 
2026-04-04 01:08:51.229978 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-04 01:08:51.229982 | orchestrator | Saturday 04 April 2026 01:05:57 +0000 (0:00:17.724) 0:01:26.060 ********
2026-04-04 01:08:51.229986 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.229990 | orchestrator | 
2026-04-04 01:08:51.229994 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-04 01:08:51.229998 | orchestrator | Saturday 04 April 2026 01:06:02 +0000 (0:00:04.939) 0:01:31.000 ********
2026-04-04 01:08:51.230002 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230006 | orchestrator | 
2026-04-04 01:08:51.230010 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-04 01:08:51.230043 | orchestrator | Saturday 04 April 2026 01:06:08 +0000 (0:00:06.216) 0:01:37.217 ********
2026-04-04 01:08:51.230047 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:08:51.230051 | orchestrator | 
2026-04-04 01:08:51.230055 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-04 01:08:51.230064 | orchestrator | Saturday 04 April 2026 01:06:08 +0000 (0:00:00.199) 0:01:37.416 ********
2026-04-04 01:08:51.230069 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:08:51.230073 | orchestrator | 
2026-04-04 01:08:51.230077 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-04 01:08:51.230081 | orchestrator | Saturday 04 April 2026 01:06:12 +0000 (0:00:04.049) 0:01:41.465 ********
2026-04-04 01:08:51.230085 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-04 01:08:51.230089 | orchestrator | 
2026-04-04 01:08:51.230093 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-04 01:08:51.230096 | orchestrator | Saturday 04 April 2026 01:06:13 +0000 (0:00:00.785) 0:01:42.250 ********
2026-04-04 01:08:51.230100 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230104 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230108 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230112 | orchestrator | 
2026-04-04 01:08:51.230116 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-04 01:08:51.230120 | orchestrator | Saturday 04 April 2026 01:06:19 +0000 (0:00:05.637) 0:01:47.887 ********
2026-04-04 01:08:51.230124 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230128 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230131 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230135 | orchestrator | 
2026-04-04 01:08:51.230139 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-04 01:08:51.230143 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:05.040) 0:01:52.928 ********
2026-04-04 01:08:51.230147 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230151 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230154 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230158 | orchestrator | 
2026-04-04 01:08:51.230162 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-04 01:08:51.230169 | orchestrator | Saturday 04 April 2026 01:06:24 +0000 (0:00:00.723) 0:01:53.651 ********
2026-04-04 01:08:51.230173 | orchestrator | ok: [testbed-node-1]
2026-04-04 01:08:51.230177 | orchestrator | ok: [testbed-node-0]
2026-04-04 01:08:51.230181 | orchestrator | ok: [testbed-node-2]
2026-04-04 01:08:51.230184 | orchestrator | 
2026-04-04 01:08:51.230188 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-04 01:08:51.230192 | orchestrator | Saturday 04 April 2026 01:06:26 +0000 (0:00:01.538) 0:01:55.190 ********
2026-04-04 01:08:51.230196 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230200 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230204 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230208 | orchestrator | 
2026-04-04 01:08:51.230211 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-04 01:08:51.230215 | orchestrator | Saturday 04 April 2026 01:06:27 +0000 (0:00:01.072) 0:01:56.358 ********
2026-04-04 01:08:51.230219 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230223 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230227 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230231 | orchestrator | 
2026-04-04 01:08:51.230235 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-04 01:08:51.230239 | orchestrator | Saturday 04 April 2026 01:06:28 +0000 (0:00:01.167) 0:01:57.430 ********
2026-04-04 01:08:51.230243 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.230247 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.230251 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.230255 | orchestrator | 
2026-04-04 01:08:51.230286 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] 
******************** 2026-04-04 01:08:51.230294 | orchestrator | Saturday 04 April 2026 01:06:30 +0000 (0:00:02.110) 0:01:59.541 ******** 2026-04-04 01:08:51.230301 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:08:51.230308 | orchestrator | changed: [testbed-node-1] 2026-04-04 01:08:51.230314 | orchestrator | changed: [testbed-node-2] 2026-04-04 01:08:51.230321 | orchestrator | 2026-04-04 01:08:51.230328 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-04-04 01:08:51.230334 | orchestrator | Saturday 04 April 2026 01:06:32 +0000 (0:00:01.749) 0:02:01.291 ******** 2026-04-04 01:08:51.230341 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230345 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:08:51.230349 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:08:51.230363 | orchestrator | 2026-04-04 01:08:51.230367 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-04-04 01:08:51.230371 | orchestrator | Saturday 04 April 2026 01:06:33 +0000 (0:00:00.657) 0:02:01.949 ******** 2026-04-04 01:08:51.230375 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:08:51.230379 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230383 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:08:51.230386 | orchestrator | 2026-04-04 01:08:51.230390 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:08:51.230394 | orchestrator | Saturday 04 April 2026 01:06:37 +0000 (0:00:03.781) 0:02:05.731 ******** 2026-04-04 01:08:51.230398 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:08:51.230402 | orchestrator | 2026-04-04 01:08:51.230406 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-04 01:08:51.230409 | orchestrator | Saturday 04 April 2026 01:06:37 
+0000 (0:00:00.567) 0:02:06.298 ******** 2026-04-04 01:08:51.230413 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230417 | orchestrator | 2026-04-04 01:08:51.230421 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-04 01:08:51.230425 | orchestrator | Saturday 04 April 2026 01:06:41 +0000 (0:00:04.091) 0:02:10.390 ******** 2026-04-04 01:08:51.230429 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230433 | orchestrator | 2026-04-04 01:08:51.230436 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-04 01:08:51.230444 | orchestrator | Saturday 04 April 2026 01:06:45 +0000 (0:00:03.777) 0:02:14.168 ******** 2026-04-04 01:08:51.230449 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-04 01:08:51.230453 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-04 01:08:51.230456 | orchestrator | 2026-04-04 01:08:51.230463 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-04 01:08:51.230467 | orchestrator | Saturday 04 April 2026 01:06:51 +0000 (0:00:05.976) 0:02:20.145 ******** 2026-04-04 01:08:51.230471 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230476 | orchestrator | 2026-04-04 01:08:51.230483 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-04 01:08:51.230492 | orchestrator | Saturday 04 April 2026 01:06:54 +0000 (0:00:03.192) 0:02:23.337 ******** 2026-04-04 01:08:51.230499 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:08:51.230505 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:08:51.230512 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:08:51.230518 | orchestrator | 2026-04-04 01:08:51.230542 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-04 01:08:51.230547 | orchestrator | Saturday 04 April 2026 
01:06:54 +0000 (0:00:00.251) 0:02:23.588 ******** 2026-04-04 01:08:51.230553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:08:51.230580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 
01:08:51.230586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:08:51.230596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:08:51.230605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-04 01:08:51.230609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:08:51.230615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230676 | orchestrator | 2026-04-04 01:08:51.230680 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-04-04 01:08:51.230685 | orchestrator | Saturday 04 April 2026 01:06:57 +0000 (0:00:02.659) 0:02:26.248 ******** 2026-04-04 01:08:51.230690 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:08:51.230694 | orchestrator | 2026-04-04 01:08:51.230710 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-04-04 01:08:51.230715 | orchestrator | Saturday 04 April 2026 01:06:57 +0000 (0:00:00.122) 0:02:26.371 ******** 2026-04-04 01:08:51.230719 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:08:51.230724 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:08:51.230728 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:08:51.230735 | orchestrator | 2026-04-04 01:08:51.230740 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-04-04 01:08:51.230745 | orchestrator | Saturday 04 April 2026 01:06:57 +0000 (0:00:00.254) 0:02:26.625 ******** 2026-04-04 01:08:51.230750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.230756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.230762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.230776 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:08:51.230792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-04-04 01:08:51.230800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.230805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.230824 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:08:51.230828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.230843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.230851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.230872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 
01:08:51.230879 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:08:51.230886 | orchestrator | 2026-04-04 01:08:51.230893 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-04 01:08:51.230899 | orchestrator | Saturday 04 April 2026 01:06:58 +0000 (0:00:00.599) 0:02:27.225 ******** 2026-04-04 01:08:51.230906 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:08:51.230913 | orchestrator | 2026-04-04 01:08:51.230921 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-04-04 01:08:51.230928 | orchestrator | Saturday 04 April 2026 01:06:59 +0000 (0:00:00.592) 0:02:27.817 ******** 2026-04-04 01:08:51.230935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:08:51.230961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:08:51.230970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-04 01:08:51.230974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:08:51.230981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:08:51.230985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-04 01:08:51.230989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.230996 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231011 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-04 01:08:51.231040 | orchestrator | 2026-04-04 01:08:51.231044 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-04-04 01:08:51.231048 | orchestrator | Saturday 04 April 2026 01:07:04 +0000 (0:00:04.966) 0:02:32.784 ******** 2026-04-04 01:08:51.231052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231077 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:08:51.231084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231107 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:08:51.231114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231124 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231138 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:08:51.231142 | orchestrator | 2026-04-04 01:08:51.231146 | 
orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-04 01:08:51.231150 | orchestrator | Saturday 04 April 2026 01:07:04 +0000 (0:00:00.581) 0:02:33.365 ******** 2026-04-04 01:08:51.231154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231180 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:08:51.231185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231204 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231215 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:08:51.231219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-04 01:08:51.231225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-04 01:08:51.231229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-04 01:08:51.231240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-04 01:08:51.231244 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:08:51.231249 | orchestrator | 2026-04-04 01:08:51.231257 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-04 01:08:51.231265 | orchestrator | Saturday 04 April 2026 01:07:05 +0000 (0:00:00.850) 0:02:34.216 ******** 2026-04-04 01:08:51.231279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231412 | orchestrator |
2026-04-04 01:08:51.231419 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-04-04 01:08:51.231430 | orchestrator | Saturday 04 April 2026 01:07:10 +0000 (0:00:04.961) 0:02:39.178 ********
2026-04-04 01:08:51.231436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-04 01:08:51.231441 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-04 01:08:51.231445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-04-04 01:08:51.231448 | orchestrator |
2026-04-04 01:08:51.231455 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-04-04 01:08:51.231459 | orchestrator | Saturday 04 April 2026 01:07:11 +0000 (0:00:01.528) 0:02:40.706 ********
2026-04-04 01:08:51.231463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231596 | orchestrator |
2026-04-04 01:08:51.231599 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-04 01:08:51.231603 | orchestrator | Saturday 04 April 2026 01:07:27 +0000 (0:00:15.216) 0:02:55.922 ********
2026-04-04 01:08:51.231607 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231611 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.231615 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.231619 | orchestrator |
2026-04-04 01:08:51.231623 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-04 01:08:51.231627 | orchestrator | Saturday 04 April 2026 01:07:28 +0000 (0:00:01.624) 0:02:57.547 ********
2026-04-04 01:08:51.231631 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231634 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231641 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231645 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231649 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231653 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231659 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231663 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231667 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231671 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231675 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231679 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231683 | orchestrator |
2026-04-04 01:08:51.231687 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-04 01:08:51.231691 | orchestrator | Saturday 04 April 2026 01:07:33 +0000 (0:00:04.371) 0:03:01.918 ********
2026-04-04 01:08:51.231694 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231698 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231702 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231706 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231710 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231714 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231717 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231721 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231725 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231729 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231733 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231736 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231740 | orchestrator |
2026-04-04 01:08:51.231744 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-04 01:08:51.231748 | orchestrator | Saturday 04 April 2026 01:07:38 +0000 (0:00:04.960) 0:03:06.878 ********
2026-04-04 01:08:51.231754 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231757 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231761 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-04 01:08:51.231765 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231769 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231773 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-04 01:08:51.231776 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231780 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231784 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-04 01:08:51.231788 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231791 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231795 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-04 01:08:51.231799 | orchestrator |
2026-04-04 01:08:51.231803 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-04-04 01:08:51.231807 | orchestrator | Saturday 04 April 2026 01:07:43 +0000 (0:00:04.857) 0:03:11.736 ********
2026-04-04 01:08:51.231811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-04 01:08:51.231835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-04 01:08:51.231847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-04 01:08:51.231879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-04 01:08:51.231897 | orchestrator |
2026-04-04 01:08:51.231901 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-04 01:08:51.231905 | orchestrator | Saturday 04 April 2026 01:07:47 +0000 (0:00:04.161) 0:03:15.897 ********
2026-04-04 01:08:51.231908 | orchestrator | skipping: [testbed-node-0]
2026-04-04 01:08:51.231912 | orchestrator | skipping: [testbed-node-1]
2026-04-04 01:08:51.231916 | orchestrator | skipping: [testbed-node-2]
2026-04-04 01:08:51.231920 | orchestrator |
2026-04-04 01:08:51.231924 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-04 01:08:51.231928 | orchestrator | Saturday 04 April 2026 01:07:47 +0000 (0:00:00.577) 0:03:16.475 ********
2026-04-04 01:08:51.231932 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231936 | orchestrator |
2026-04-04 01:08:51.231940 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-04 01:08:51.231943 | orchestrator | Saturday 04 April 2026 01:07:49 +0000 (0:00:02.217) 0:03:18.692 ********
2026-04-04 01:08:51.231947 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231951 | orchestrator |
2026-04-04 01:08:51.231955 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-04 01:08:51.231959 | orchestrator | Saturday 04 April 2026 01:07:52 +0000 (0:00:02.160) 0:03:20.853 ********
2026-04-04 01:08:51.231963 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231967 | orchestrator |
2026-04-04 01:08:51.231971 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-04 01:08:51.231974 | orchestrator | Saturday 04 April 2026 01:07:54 +0000 (0:00:02.848) 0:03:23.701 ********
2026-04-04 01:08:51.231978 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231982 | orchestrator |
2026-04-04 01:08:51.231986 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-04 01:08:51.231990 | orchestrator | Saturday 04 April 2026 01:07:57 +0000 (0:00:02.210) 0:03:25.912 ********
2026-04-04 01:08:51.231994 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.231998 | orchestrator |
2026-04-04 01:08:51.232002 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-04 01:08:51.232005 | orchestrator | Saturday 04 April 2026 01:08:18 +0000 (0:00:21.563) 0:03:47.475 ********
2026-04-04 01:08:51.232009 | orchestrator |
2026-04-04 01:08:51.232013 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-04 01:08:51.232017 | orchestrator | Saturday 04 April 2026 01:08:18 +0000 (0:00:00.067) 0:03:47.543 ********
2026-04-04 01:08:51.232021 | orchestrator |
2026-04-04 01:08:51.232027 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-04 01:08:51.232034 | orchestrator | Saturday 04 April 2026 01:08:18 +0000 (0:00:00.066) 0:03:47.609 ********
2026-04-04 01:08:51.232038 | orchestrator |
2026-04-04 01:08:51.232042 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-04-04 01:08:51.232046 | orchestrator | Saturday 04 April 2026 01:08:18 +0000 (0:00:00.065) 0:03:47.674 ********
2026-04-04 01:08:51.232049 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.232053 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.232057 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.232061 | orchestrator |
2026-04-04 01:08:51.232065 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-04-04 01:08:51.232069 | orchestrator | Saturday 04 April 2026 01:08:27 +0000 (0:00:08.791) 0:03:56.466 ********
2026-04-04 01:08:51.232073 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.232076 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.232080 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.232084 | orchestrator |
2026-04-04 01:08:51.232088 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-04-04 01:08:51.232092 | orchestrator | Saturday 04 April 2026 01:08:34 +0000 (0:00:06.816) 0:04:03.283 ********
2026-04-04 01:08:51.232096 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.232100 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.232104 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.232108 | orchestrator |
2026-04-04 01:08:51.232111 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-04-04 01:08:51.232115 | orchestrator | Saturday 04 April 2026 01:08:39 +0000 (0:00:04.576) 0:04:07.859 ********
2026-04-04 01:08:51.232119 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.232123 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.232127 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.232131 | orchestrator |
2026-04-04 01:08:51.232134 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-04-04 01:08:51.232138 | orchestrator | Saturday 04 April 2026 01:08:44 +0000 (0:00:04.909) 0:04:12.769 ********
2026-04-04 01:08:51.232142 | orchestrator | changed: [testbed-node-0]
2026-04-04 01:08:51.232146 | orchestrator | changed: [testbed-node-1]
2026-04-04 01:08:51.232149 | orchestrator | changed: [testbed-node-2]
2026-04-04 01:08:51.232153 | orchestrator |
2026-04-04 01:08:51.232157 | orchestrator | PLAY RECAP
********************************************************************* 2026-04-04 01:08:51.232161 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-04 01:08:51.232166 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 01:08:51.232170 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-04 01:08:51.232173 | orchestrator | 2026-04-04 01:08:51.232177 | orchestrator | 2026-04-04 01:08:51.232181 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:08:51.232185 | orchestrator | Saturday 04 April 2026 01:08:49 +0000 (0:00:05.123) 0:04:17.892 ******** 2026-04-04 01:08:51.232191 | orchestrator | =============================================================================== 2026-04-04 01:08:51.232195 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.56s 2026-04-04 01:08:51.232199 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.72s 2026-04-04 01:08:51.232202 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.22s 2026-04-04 01:08:51.232206 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.80s 2026-04-04 01:08:51.232210 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.40s 2026-04-04 01:08:51.232217 | orchestrator | octavia : Restart octavia-api container --------------------------------- 8.79s 2026-04-04 01:08:51.232221 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.30s 2026-04-04 01:08:51.232225 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.84s 2026-04-04 01:08:51.232229 | orchestrator | octavia : Restart 
octavia-driver-agent container ------------------------ 6.82s 2026-04-04 01:08:51.232233 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.22s 2026-04-04 01:08:51.232237 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.98s 2026-04-04 01:08:51.232241 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.64s 2026-04-04 01:08:51.232245 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.57s 2026-04-04 01:08:51.232249 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.12s 2026-04-04 01:08:51.232253 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.04s 2026-04-04 01:08:51.232257 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.97s 2026-04-04 01:08:51.232261 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.96s 2026-04-04 01:08:51.232265 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.96s 2026-04-04 01:08:51.232269 | orchestrator | octavia : Create loadbalancer management network ------------------------ 4.94s 2026-04-04 01:08:51.232272 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 4.91s 2026-04-04 01:08:51.232276 | orchestrator | 2026-04-04 01:08:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:08:54.272285 | orchestrator | 2026-04-04 01:08:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:08:57.316920 | orchestrator | 2026-04-04 01:08:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:09:00.354664 | orchestrator | 2026-04-04 01:09:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-04-04 01:09:03.394834 | orchestrator | 2026-04-04 01:09:03 | INFO  | Wait 1 
second(s) until refresh of running tasks 2026-04-04 01:09:52.027403 | orchestrator | 2026-04-04 01:09:52.256395 | orchestrator | 2026-04-04 01:09:52.262974 | 
orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Apr 4 01:09:52 UTC 2026 2026-04-04 01:09:52.263077 | orchestrator | 2026-04-04 01:09:52.751054 | orchestrator | ok: Runtime: 0:31:04.888093 2026-04-04 01:09:53.017840 | 2026-04-04 01:09:53.018012 | TASK [Bootstrap services] 2026-04-04 01:09:53.817669 | orchestrator | 2026-04-04 01:09:53.817798 | orchestrator | # BOOTSTRAP 2026-04-04 01:09:53.817808 | orchestrator | 2026-04-04 01:09:53.817813 | orchestrator | + set -e 2026-04-04 01:09:53.817818 | orchestrator | + echo 2026-04-04 01:09:53.817823 | orchestrator | + echo '# BOOTSTRAP' 2026-04-04 01:09:53.817830 | orchestrator | + echo 2026-04-04 01:09:53.817850 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-04-04 01:09:53.825917 | orchestrator | + set -e 2026-04-04 01:09:53.826163 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-04-04 01:09:58.700702 | orchestrator | 2026-04-04 01:09:58 | INFO  | It takes a moment until task a9d78618-ffaf-4442-a6f4-57c6a75362a7 (flavor-manager) has been started and output is visible here. 
2026-04-04 01:10:08.766145 | orchestrator | 2026-04-04 01:10:03 | INFO  | Flavor SCS-1L-1 created 2026-04-04 01:10:08.766272 | orchestrator | 2026-04-04 01:10:04 | INFO  | Flavor SCS-1L-1-5 created 2026-04-04 01:10:08.766287 | orchestrator | 2026-04-04 01:10:04 | INFO  | Flavor SCS-1V-2 created 2026-04-04 01:10:08.766295 | orchestrator | 2026-04-04 01:10:04 | INFO  | Flavor SCS-1V-2-5 created 2026-04-04 01:10:08.766302 | orchestrator | 2026-04-04 01:10:04 | INFO  | Flavor SCS-1V-4 created 2026-04-04 01:10:08.766308 | orchestrator | 2026-04-04 01:10:04 | INFO  | Flavor SCS-1V-4-10 created 2026-04-04 01:10:08.766315 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-1V-8 created 2026-04-04 01:10:08.766322 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-1V-8-20 created 2026-04-04 01:10:08.766335 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-2V-4 created 2026-04-04 01:10:08.766341 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-2V-4-10 created 2026-04-04 01:10:08.766348 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-2V-8 created 2026-04-04 01:10:08.766355 | orchestrator | 2026-04-04 01:10:05 | INFO  | Flavor SCS-2V-8-20 created 2026-04-04 01:10:08.766361 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-2V-16 created 2026-04-04 01:10:08.766368 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-2V-16-50 created 2026-04-04 01:10:08.766374 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-4V-8 created 2026-04-04 01:10:08.766380 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-4V-8-20 created 2026-04-04 01:10:08.766387 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-4V-16 created 2026-04-04 01:10:08.766394 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-4V-16-50 created 2026-04-04 01:10:08.766400 | orchestrator | 2026-04-04 01:10:06 | INFO  | Flavor SCS-4V-32 created 2026-04-04 01:10:08.766407 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-4V-32-100 created 
2026-04-04 01:10:08.766414 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-8V-16 created 2026-04-04 01:10:08.766420 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-8V-16-50 created 2026-04-04 01:10:08.766427 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-8V-32 created 2026-04-04 01:10:08.766498 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-8V-32-100 created 2026-04-04 01:10:08.766508 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-16V-32 created 2026-04-04 01:10:08.766515 | orchestrator | 2026-04-04 01:10:07 | INFO  | Flavor SCS-16V-32-100 created 2026-04-04 01:10:08.766522 | orchestrator | 2026-04-04 01:10:08 | INFO  | Flavor SCS-2V-4-20s created 2026-04-04 01:10:08.766528 | orchestrator | 2026-04-04 01:10:08 | INFO  | Flavor SCS-4V-8-50s created 2026-04-04 01:10:08.766534 | orchestrator | 2026-04-04 01:10:08 | INFO  | Flavor SCS-4V-16-100s created 2026-04-04 01:10:08.766541 | orchestrator | 2026-04-04 01:10:08 | INFO  | Flavor SCS-8V-32-100s created 2026-04-04 01:10:10.294065 | orchestrator | 2026-04-04 01:10:10 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-04-04 01:10:20.412898 | orchestrator | 2026-04-04 01:10:20 | INFO  | Prepare task for execution of bootstrap-basic. 2026-04-04 01:10:20.506425 | orchestrator | 2026-04-04 01:10:20 | INFO  | Task f9c16fdb-732c-4036-9d59-a62bbda3aced (bootstrap-basic) was prepared for execution. 2026-04-04 01:10:20.506593 | orchestrator | 2026-04-04 01:10:20 | INFO  | It takes a moment until task f9c16fdb-732c-4036-9d59-a62bbda3aced (bootstrap-basic) has been started and output is visible here. 
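The flavor names created above follow the SCS flavor naming scheme: `SCS-<vCPUs><cpu type>-<RAM GiB>[-<disk GB><disk type>]`, e.g. `SCS-4V-16-50` is 4 vCPUs, 16 GiB RAM, 50 GB root disk, and a trailing `s` (as in `SCS-2V-4-20s`) marks a local SSD disk. A minimal sketch of parsing such a name; the helper is hypothetical (not part of OSISM or the flavor-manager) and the suffix letters covered are only those appearing in this log plus the common SCS ones:

```python
import re

# Assumed pattern: SCS-<cpus><cpu type>-<ram>[-<disk><disk type>]
# cpu type letters (per the SCS naming standard): L, V, T, C
# disk type letters: n (network), s (local SSD); absent = default
SCS_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_type>[LVTC])"
    r"-(?P<ram>\d+(?:\.\d+)?)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>[ns]?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS-style flavor name into its resource components."""
    m = SCS_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    d = m.groupdict()
    return {
        "vcpus": int(d["cpus"]),
        "cpu_type": d["cpu_type"],            # e.g. V = overcommitted vCPU
        "ram_gib": float(d["ram"]),
        "disk_gb": int(d["disk"]) if d["disk"] else None,
        "disk_type": d["disk_type"] or None,  # e.g. s = local SSD
    }

# Example: parse_scs_flavor("SCS-4V-16-50") -> 4 vCPUs, 16 GiB RAM, 50 GB disk
```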
2026-04-04 01:11:07.860206 | orchestrator | 2026-04-04 01:11:07.860263 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-04-04 01:11:07.860270 | orchestrator | 2026-04-04 01:11:07.860275 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-04 01:11:07.860280 | orchestrator | Saturday 04 April 2026 01:10:23 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-04-04 01:11:07.860285 | orchestrator | ok: [localhost] 2026-04-04 01:11:07.860290 | orchestrator | 2026-04-04 01:11:07.860295 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-04-04 01:11:07.860299 | orchestrator | Saturday 04 April 2026 01:10:25 +0000 (0:00:02.080) 0:00:02.232 ******** 2026-04-04 01:11:07.860306 | orchestrator | ok: [localhost] 2026-04-04 01:11:07.860310 | orchestrator | 2026-04-04 01:11:07.860315 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-04-04 01:11:07.860319 | orchestrator | Saturday 04 April 2026 01:10:35 +0000 (0:00:10.061) 0:00:12.294 ******** 2026-04-04 01:11:07.860324 | orchestrator | changed: [localhost] 2026-04-04 01:11:07.860329 | orchestrator | 2026-04-04 01:11:07.860334 | orchestrator | TASK [Create public network] *************************************************** 2026-04-04 01:11:07.860339 | orchestrator | Saturday 04 April 2026 01:10:43 +0000 (0:00:08.016) 0:00:20.311 ******** 2026-04-04 01:11:07.860343 | orchestrator | changed: [localhost] 2026-04-04 01:11:07.860348 | orchestrator | 2026-04-04 01:11:07.860354 | orchestrator | TASK [Set public network to default] ******************************************* 2026-04-04 01:11:07.860359 | orchestrator | Saturday 04 April 2026 01:10:49 +0000 (0:00:05.826) 0:00:26.137 ******** 2026-04-04 01:11:07.860364 | orchestrator | changed: [localhost] 2026-04-04 01:11:07.860368 | orchestrator | 2026-04-04 01:11:07.860373 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-04-04 01:11:07.860378 | orchestrator | Saturday 04 April 2026 01:10:56 +0000 (0:00:06.380) 0:00:32.518 ******** 2026-04-04 01:11:07.860382 | orchestrator | changed: [localhost] 2026-04-04 01:11:07.860387 | orchestrator | 2026-04-04 01:11:07.860391 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-04-04 01:11:07.860396 | orchestrator | Saturday 04 April 2026 01:11:00 +0000 (0:00:04.309) 0:00:36.827 ******** 2026-04-04 01:11:07.860401 | orchestrator | changed: [localhost] 2026-04-04 01:11:07.860405 | orchestrator | 2026-04-04 01:11:07.860410 | orchestrator | TASK [Create manager role] ***************************************************** 2026-04-04 01:11:07.860419 | orchestrator | Saturday 04 April 2026 01:11:04 +0000 (0:00:03.629) 0:00:40.457 ******** 2026-04-04 01:11:07.860425 | orchestrator | ok: [localhost] 2026-04-04 01:11:07.860429 | orchestrator | 2026-04-04 01:11:07.860434 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:11:07.860439 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-04 01:11:07.860444 | orchestrator | 2026-04-04 01:11:07.860448 | orchestrator | 2026-04-04 01:11:07.860453 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:11:07.860458 | orchestrator | Saturday 04 April 2026 01:11:07 +0000 (0:00:03.549) 0:00:44.006 ******** 2026-04-04 01:11:07.860488 | orchestrator | =============================================================================== 2026-04-04 01:11:07.860494 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.06s 2026-04-04 01:11:07.860509 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.02s 2026-04-04 01:11:07.860514 | 
orchestrator | Set public network to default ------------------------------------------- 6.38s 2026-04-04 01:11:07.860519 | orchestrator | Create public network --------------------------------------------------- 5.83s 2026-04-04 01:11:07.860523 | orchestrator | Create public subnet ---------------------------------------------------- 4.31s 2026-04-04 01:11:07.860528 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.63s 2026-04-04 01:11:07.860533 | orchestrator | Create manager role ----------------------------------------------------- 3.55s 2026-04-04 01:11:07.860537 | orchestrator | Gathering Facts --------------------------------------------------------- 2.08s 2026-04-04 01:11:09.795319 | orchestrator | 2026-04-04 01:11:09 | INFO  | It takes a moment until task 7b7fb642-c419-4d3a-a365-6a0cfd928ef2 (image-manager) has been started and output is visible here. 2026-04-04 01:11:50.205148 | orchestrator | 2026-04-04 01:11:12 | INFO  | Processing image 'Cirros 0.6.2' 2026-04-04 01:11:50.205242 | orchestrator | 2026-04-04 01:11:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-04-04 01:11:50.205253 | orchestrator | 2026-04-04 01:11:12 | INFO  | Importing image Cirros 0.6.2 2026-04-04 01:11:50.205261 | orchestrator | 2026-04-04 01:11:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-04 01:11:50.205270 | orchestrator | 2026-04-04 01:11:15 | INFO  | Waiting for image to leave queued state... 2026-04-04 01:11:50.205278 | orchestrator | 2026-04-04 01:11:17 | INFO  | Waiting for import to complete... 
2026-04-04 01:11:50.205285 | orchestrator | 2026-04-04 01:11:27 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-04-04 01:11:50.205293 | orchestrator | 2026-04-04 01:11:27 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-04-04 01:11:50.205301 | orchestrator | 2026-04-04 01:11:27 | INFO  | Setting internal_version = 0.6.2 2026-04-04 01:11:50.205308 | orchestrator | 2026-04-04 01:11:27 | INFO  | Setting image_original_user = cirros 2026-04-04 01:11:50.205316 | orchestrator | 2026-04-04 01:11:27 | INFO  | Adding tag os:cirros 2026-04-04 01:11:50.205323 | orchestrator | 2026-04-04 01:11:27 | INFO  | Setting property architecture: x86_64 2026-04-04 01:11:50.205330 | orchestrator | 2026-04-04 01:11:27 | INFO  | Setting property hw_disk_bus: scsi 2026-04-04 01:11:50.205337 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property hw_rng_model: virtio 2026-04-04 01:11:50.205344 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-04 01:11:50.205352 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property hw_watchdog_action: reset 2026-04-04 01:11:50.205364 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property hypervisor_type: qemu 2026-04-04 01:11:50.205384 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property os_distro: cirros 2026-04-04 01:11:50.205396 | orchestrator | 2026-04-04 01:11:28 | INFO  | Setting property os_purpose: minimal 2026-04-04 01:11:50.205408 | orchestrator | 2026-04-04 01:11:29 | INFO  | Setting property replace_frequency: never 2026-04-04 01:11:50.205420 | orchestrator | 2026-04-04 01:11:29 | INFO  | Setting property uuid_validity: none 2026-04-04 01:11:50.205431 | orchestrator | 2026-04-04 01:11:29 | INFO  | Setting property provided_until: none 2026-04-04 01:11:50.205443 | orchestrator | 2026-04-04 01:11:29 | INFO  | Setting property image_description: Cirros 2026-04-04 01:11:50.205455 | orchestrator | 2026-04-04 01:11:29 | INFO  | 
Setting property image_name: Cirros 2026-04-04 01:11:50.205568 | orchestrator | 2026-04-04 01:11:30 | INFO  | Setting property internal_version: 0.6.2 2026-04-04 01:11:50.205583 | orchestrator | 2026-04-04 01:11:30 | INFO  | Setting property image_original_user: cirros 2026-04-04 01:11:50.205596 | orchestrator | 2026-04-04 01:11:30 | INFO  | Setting property os_version: 0.6.2 2026-04-04 01:11:50.205610 | orchestrator | 2026-04-04 01:11:30 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-04-04 01:11:50.205624 | orchestrator | 2026-04-04 01:11:30 | INFO  | Setting property image_build_date: 2023-05-30 2026-04-04 01:11:50.205636 | orchestrator | 2026-04-04 01:11:31 | INFO  | Checking status of 'Cirros 0.6.2' 2026-04-04 01:11:50.205643 | orchestrator | 2026-04-04 01:11:31 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-04-04 01:11:50.205653 | orchestrator | 2026-04-04 01:11:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-04-04 01:11:50.205661 | orchestrator | 2026-04-04 01:11:31 | INFO  | Processing image 'Cirros 0.6.3' 2026-04-04 01:11:50.205668 | orchestrator | 2026-04-04 01:11:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-04-04 01:11:50.205675 | orchestrator | 2026-04-04 01:11:31 | INFO  | Importing image Cirros 0.6.3 2026-04-04 01:11:50.205682 | orchestrator | 2026-04-04 01:11:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-04 01:11:50.205689 | orchestrator | 2026-04-04 01:11:31 | INFO  | Waiting for image to leave queued state... 2026-04-04 01:11:50.205696 | orchestrator | 2026-04-04 01:11:33 | INFO  | Waiting for import to complete... 
2026-04-04 01:11:50.205721 | orchestrator | 2026-04-04 01:11:43 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-04-04 01:11:50.205729 | orchestrator | 2026-04-04 01:11:44 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-04-04 01:11:50.205736 | orchestrator | 2026-04-04 01:11:44 | INFO  | Setting internal_version = 0.6.3 2026-04-04 01:11:50.205743 | orchestrator | 2026-04-04 01:11:44 | INFO  | Setting image_original_user = cirros 2026-04-04 01:11:50.205750 | orchestrator | 2026-04-04 01:11:44 | INFO  | Adding tag os:cirros 2026-04-04 01:11:50.205757 | orchestrator | 2026-04-04 01:11:44 | INFO  | Setting property architecture: x86_64 2026-04-04 01:11:50.205765 | orchestrator | 2026-04-04 01:11:45 | INFO  | Setting property hw_disk_bus: scsi 2026-04-04 01:11:50.205772 | orchestrator | 2026-04-04 01:11:45 | INFO  | Setting property hw_rng_model: virtio 2026-04-04 01:11:50.205779 | orchestrator | 2026-04-04 01:11:45 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-04 01:11:50.205786 | orchestrator | 2026-04-04 01:11:45 | INFO  | Setting property hw_watchdog_action: reset 2026-04-04 01:11:50.205793 | orchestrator | 2026-04-04 01:11:45 | INFO  | Setting property hypervisor_type: qemu 2026-04-04 01:11:50.205800 | orchestrator | 2026-04-04 01:11:46 | INFO  | Setting property os_distro: cirros 2026-04-04 01:11:50.205808 | orchestrator | 2026-04-04 01:11:46 | INFO  | Setting property os_purpose: minimal 2026-04-04 01:11:50.205815 | orchestrator | 2026-04-04 01:11:46 | INFO  | Setting property replace_frequency: never 2026-04-04 01:11:50.205822 | orchestrator | 2026-04-04 01:11:46 | INFO  | Setting property uuid_validity: none 2026-04-04 01:11:50.205830 | orchestrator | 2026-04-04 01:11:47 | INFO  | Setting property provided_until: none 2026-04-04 01:11:50.205837 | orchestrator | 2026-04-04 01:11:47 | INFO  | Setting property image_description: Cirros 2026-04-04 01:11:50.205851 | orchestrator | 2026-04-04 01:11:47 | INFO  | 
Setting property image_name: Cirros 2026-04-04 01:11:50.205858 | orchestrator | 2026-04-04 01:11:47 | INFO  | Setting property internal_version: 0.6.3 2026-04-04 01:11:50.205865 | orchestrator | 2026-04-04 01:11:48 | INFO  | Setting property image_original_user: cirros 2026-04-04 01:11:50.205873 | orchestrator | 2026-04-04 01:11:48 | INFO  | Setting property os_version: 0.6.3 2026-04-04 01:11:50.205880 | orchestrator | 2026-04-04 01:11:48 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-04-04 01:11:50.205887 | orchestrator | 2026-04-04 01:11:49 | INFO  | Setting property image_build_date: 2024-09-26 2026-04-04 01:11:50.205894 | orchestrator | 2026-04-04 01:11:49 | INFO  | Checking status of 'Cirros 0.6.3' 2026-04-04 01:11:50.205905 | orchestrator | 2026-04-04 01:11:49 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-04-04 01:11:50.205917 | orchestrator | 2026-04-04 01:11:49 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-04-04 01:11:50.470674 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-04-04 01:11:52.351680 | orchestrator | 2026-04-04 01:11:52 | INFO  | date: 2026-04-03 2026-04-04 01:11:52.351766 | orchestrator | 2026-04-04 01:11:52 | INFO  | image: octavia-amphora-haproxy-2024.2.20260403.qcow2 2026-04-04 01:11:52.351788 | orchestrator | 2026-04-04 01:11:52 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260403.qcow2 2026-04-04 01:11:52.351795 | orchestrator | 2026-04-04 01:11:52 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260403.qcow2.CHECKSUM 2026-04-04 01:11:52.532526 | orchestrator | 2026-04-04 01:11:52 | INFO  | checksum: 9296772343c1db2698e624621b60df9166f030ac326c14002db992c2b8a03de2 2026-04-04 01:11:52.615085 | orchestrator | 
2026-04-04 01:11:52 | INFO  | It takes a moment until task aa5eb978-ca50-4c3e-8cf4-b7f076a442e4 (image-manager) has been started and output is visible here. 2026-04-04 01:12:55.593180 | orchestrator | 2026-04-04 01:11:54 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-03' 2026-04-04 01:12:55.593290 | orchestrator | 2026-04-04 01:11:54 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260403.qcow2: 200 2026-04-04 01:12:55.593304 | orchestrator | 2026-04-04 01:11:54 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-03 2026-04-04 01:12:55.593311 | orchestrator | 2026-04-04 01:11:54 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260403.qcow2 2026-04-04 01:12:55.593320 | orchestrator | 2026-04-04 01:11:57 | INFO  | Waiting for image to leave queued state... 2026-04-04 01:12:55.593326 | orchestrator | 2026-04-04 01:11:59 | INFO  | Waiting for import to complete... 2026-04-04 01:12:55.593330 | orchestrator | 2026-04-04 01:12:09 | INFO  | Waiting for import to complete... 2026-04-04 01:12:55.593335 | orchestrator | 2026-04-04 01:12:19 | INFO  | Waiting for import to complete... 2026-04-04 01:12:55.593339 | orchestrator | 2026-04-04 01:12:29 | INFO  | Waiting for import to complete... 2026-04-04 01:12:55.593345 | orchestrator | 2026-04-04 01:12:39 | INFO  | Waiting for import to complete... 
2026-04-04 01:12:55.593350 | orchestrator | 2026-04-04 01:12:49 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-03' successfully completed, reloading images 2026-04-04 01:12:55.593373 | orchestrator | 2026-04-04 01:12:50 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-03' 2026-04-04 01:12:55.593378 | orchestrator | 2026-04-04 01:12:50 | INFO  | Setting internal_version = 2026-04-03 2026-04-04 01:12:55.593382 | orchestrator | 2026-04-04 01:12:50 | INFO  | Setting image_original_user = ubuntu 2026-04-04 01:12:55.593386 | orchestrator | 2026-04-04 01:12:50 | INFO  | Adding tag amphora 2026-04-04 01:12:55.593391 | orchestrator | 2026-04-04 01:12:50 | INFO  | Adding tag os:ubuntu 2026-04-04 01:12:55.593394 | orchestrator | 2026-04-04 01:12:50 | INFO  | Setting property architecture: x86_64 2026-04-04 01:12:55.593398 | orchestrator | 2026-04-04 01:12:50 | INFO  | Setting property hw_disk_bus: scsi 2026-04-04 01:12:55.593402 | orchestrator | 2026-04-04 01:12:51 | INFO  | Setting property hw_rng_model: virtio 2026-04-04 01:12:55.593407 | orchestrator | 2026-04-04 01:12:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-04-04 01:12:55.593411 | orchestrator | 2026-04-04 01:12:51 | INFO  | Setting property hw_watchdog_action: reset 2026-04-04 01:12:55.593414 | orchestrator | 2026-04-04 01:12:51 | INFO  | Setting property hypervisor_type: qemu 2026-04-04 01:12:55.593418 | orchestrator | 2026-04-04 01:12:51 | INFO  | Setting property os_distro: ubuntu 2026-04-04 01:12:55.593422 | orchestrator | 2026-04-04 01:12:52 | INFO  | Setting property replace_frequency: quarterly 2026-04-04 01:12:55.593426 | orchestrator | 2026-04-04 01:12:52 | INFO  | Setting property uuid_validity: last-1 2026-04-04 01:12:55.593429 | orchestrator | 2026-04-04 01:12:52 | INFO  | Setting property provided_until: none 2026-04-04 01:12:55.593433 | orchestrator | 2026-04-04 01:12:52 | INFO  | Setting property os_purpose: network 2026-04-04 01:12:55.593437 | orchestrator 
| 2026-04-04 01:12:53 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-04-04 01:12:55.593453 | orchestrator | 2026-04-04 01:12:53 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-04-04 01:12:55.593457 | orchestrator | 2026-04-04 01:12:53 | INFO  | Setting property internal_version: 2026-04-03 2026-04-04 01:12:55.593460 | orchestrator | 2026-04-04 01:12:53 | INFO  | Setting property image_original_user: ubuntu 2026-04-04 01:12:55.593464 | orchestrator | 2026-04-04 01:12:54 | INFO  | Setting property os_version: 2026-04-03 2026-04-04 01:12:55.593468 | orchestrator | 2026-04-04 01:12:54 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260403.qcow2 2026-04-04 01:12:55.593472 | orchestrator | 2026-04-04 01:12:54 | INFO  | Setting property image_build_date: 2026-04-03 2026-04-04 01:12:55.593476 | orchestrator | 2026-04-04 01:12:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-03' 2026-04-04 01:12:55.593480 | orchestrator | 2026-04-04 01:12:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-03' 2026-04-04 01:12:55.593483 | orchestrator | 2026-04-04 01:12:55 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-04-04 01:12:55.593525 | orchestrator | 2026-04-04 01:12:55 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-04-04 01:12:55.593531 | orchestrator | 2026-04-04 01:12:55 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-04-04 01:12:55.593537 | orchestrator | 2026-04-04 01:12:55 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-04-04 01:12:56.159469 | orchestrator | ok: Runtime: 0:03:02.384019 2026-04-04 01:12:56.182686 | 2026-04-04 01:12:56.182828 | TASK [Run checks] 2026-04-04 01:12:56.946644 | orchestrator | + set -e 2026-04-04 01:12:56.946799 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-04-04 01:12:56.946814 | orchestrator | ++ export INTERACTIVE=false 2026-04-04 01:12:56.946823 | orchestrator | ++ INTERACTIVE=false 2026-04-04 01:12:56.946829 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 01:12:56.946833 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 01:12:56.946839 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-04 01:12:56.947855 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-04 01:12:56.954435 | orchestrator | 2026-04-04 01:12:56.954550 | orchestrator | # CHECK 2026-04-04 01:12:56.954562 | orchestrator | 2026-04-04 01:12:56.954570 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:12:56.954583 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:12:56.954587 | orchestrator | + echo 2026-04-04 01:12:56.954591 | orchestrator | + echo '# CHECK' 2026-04-04 01:12:56.954596 | orchestrator | + echo 2026-04-04 01:12:56.954603 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-04 01:12:56.955293 | orchestrator | ++ semver latest 5.0.0 2026-04-04 01:12:57.019113 | orchestrator | 2026-04-04 01:12:57.019197 | orchestrator | ## Containers @ testbed-manager 2026-04-04 01:12:57.019207 | orchestrator | 2026-04-04 01:12:57.019216 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-04 01:12:57.019222 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 01:12:57.019229 | orchestrator | + echo 2026-04-04 01:12:57.019236 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-04 01:12:57.019242 | orchestrator | + echo 2026-04-04 01:12:57.019249 | orchestrator | + osism container testbed-manager ps 2026-04-04 01:12:58.096003 | orchestrator | 2026-04-04 01:12:58 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-04-04 01:12:58.486399 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 
2026-04-04 01:12:58.486483 | orchestrator | eca6035e8895 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2026-04-04 01:12:58.486519 | orchestrator | 2ea72927533e registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2026-04-04 01:12:58.486529 | orchestrator | 7c58ba4ed028 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-04-04 01:12:58.486533 | orchestrator | 4e797258f7e9 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-04 01:12:58.486540 | orchestrator | 192f0ef40726 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2026-04-04 01:12:58.486545 | orchestrator | 4f4a14862e6e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 16 minutes ago Up 16 minutes cephclient 2026-04-04 01:12:58.486549 | orchestrator | d6b75f1c6198 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-04 01:12:58.486553 | orchestrator | 9282f55d3c08 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-04 01:12:58.486572 | orchestrator | 327c40a05a9c registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes fluentd 2026-04-04 01:12:58.486576 | orchestrator | 0d344e12155e phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 28 minutes ago Up 27 minutes (healthy) 80/tcp phpmyadmin 2026-04-04 01:12:58.486580 | orchestrator | 8883f28249ce registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 28 minutes ago Up 28 minutes openstackclient 2026-04-04 01:12:58.486584 | orchestrator | 1a6ed788e8ef 
registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 28 minutes ago Up 28 minutes (healthy) 8080/tcp homer 2026-04-04 01:12:58.486589 | orchestrator | 58525fe8b3a7 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2026-04-04 01:12:58.486593 | orchestrator | c700c32b1e1b registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 34 minutes (healthy) manager-inventory_reconciler-1 2026-04-04 01:12:58.486597 | orchestrator | 8bdc3ef9941a registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-ansible 2026-04-04 01:12:58.486612 | orchestrator | 44efc196741b registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) kolla-ansible 2026-04-04 01:12:58.486619 | orchestrator | e69a7ce511da registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) osism-kubernetes 2026-04-04 01:12:58.486623 | orchestrator | 23eaa4541cbf registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 35 minutes (healthy) ceph-ansible 2026-04-04 01:12:58.486627 | orchestrator | 1ab3e0259590 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 56 minutes ago Up 35 minutes (healthy) 8000/tcp manager-ara-server-1 2026-04-04 01:12:58.486631 | orchestrator | 40a216611c1d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 35 minutes (healthy) manager-beat-1 2026-04-04 01:12:58.486634 | orchestrator | d60d07243e42 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 35 minutes (healthy) manager-openstack-1 2026-04-04 01:12:58.486638 | orchestrator | cdb4bcffb1f6 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 56 minutes ago Up 35 minutes (healthy) 6379/tcp manager-redis-1 
2026-04-04 01:12:58.486642 | orchestrator | 37d4458d7761 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 35 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-04 01:12:58.486650 | orchestrator | 9fd3a6dd1bf6 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 35 minutes (healthy) osismclient 2026-04-04 01:12:58.486653 | orchestrator | b0fc22425ed6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 56 minutes ago Up 35 minutes (healthy) 3306/tcp manager-mariadb-1 2026-04-04 01:12:58.486657 | orchestrator | 548e09048871 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 35 minutes (healthy) manager-flower-1 2026-04-04 01:12:58.486661 | orchestrator | 232f3a1f1270 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 35 minutes (healthy) manager-listener-1 2026-04-04 01:12:58.486665 | orchestrator | 9ff9ce6b9325 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 56 minutes ago Up 35 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-04 01:12:58.486669 | orchestrator | 41c324629dbe registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-04 01:12:58.618868 | orchestrator | 2026-04-04 01:12:58.618932 | orchestrator | ## Images @ testbed-manager 2026-04-04 01:12:58.618941 | orchestrator | 2026-04-04 01:12:58.618947 | orchestrator | + echo 2026-04-04 01:12:58.618952 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-04 01:12:58.618958 | orchestrator | + echo 2026-04-04 01:12:58.618966 | orchestrator | + osism container testbed-manager images 2026-04-04 01:13:00.027318 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-04 01:13:00.027394 | orchestrator | registry.osism.tech/osism/osism-ansible latest 0df404b6426d About 
an hour ago 638MB 2026-04-04 01:13:00.027405 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 72fe72af6861 About an hour ago 636MB 2026-04-04 01:13:00.027414 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 4bedf21e21b2 About an hour ago 585MB 2026-04-04 01:13:00.027422 | orchestrator | registry.osism.tech/osism/osism latest 2093f32a3ff3 About an hour ago 407MB 2026-04-04 01:13:00.027444 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 1e1660ae2c68 About an hour ago 1.24GB 2026-04-04 01:13:00.027452 | orchestrator | registry.osism.tech/osism/osism-frontend latest 0be85bab30fc About an hour ago 212MB 2026-04-04 01:13:00.027460 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 86eb43df08a0 About an hour ago 357MB 2026-04-04 01:13:00.027468 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 cd42573b3301 21 hours ago 239MB 2026-04-04 01:13:00.027476 | orchestrator | registry.osism.tech/osism/cephclient reef 6cbb9cfaba46 21 hours ago 453MB 2026-04-04 01:13:00.027484 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0d1d30da4e9f 23 hours ago 679MB 2026-04-04 01:13:00.027492 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0d7b5b093589 23 hours ago 590MB 2026-04-04 01:13:00.027522 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5f5d198c7800 23 hours ago 277MB 2026-04-04 01:13:00.027535 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 4c0c9864c746 23 hours ago 415MB 2026-04-04 01:13:00.027543 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 530b0e9e30ff 23 hours ago 368MB 2026-04-04 01:13:00.027566 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 51d3e1572312 23 hours ago 317MB 2026-04-04 01:13:00.027574 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 a5313bce85f6 23 hours ago 850MB 2026-04-04 01:13:00.027582 | orchestrator | 
registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 9fe89c9bb3e1 23 hours ago 319MB 2026-04-04 01:13:00.027590 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-04 01:13:00.027598 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-04 01:13:00.027606 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-04-04 01:13:00.027614 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-04-04 01:13:00.027622 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-04 01:13:00.027630 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-04 01:13:00.027665 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-04 01:13:00.155690 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-04 01:13:00.156400 | orchestrator | ++ semver latest 5.0.0 2026-04-04 01:13:00.221434 | orchestrator | 2026-04-04 01:13:00.221493 | orchestrator | ## Containers @ testbed-node-0 2026-04-04 01:13:00.221528 | orchestrator | 2026-04-04 01:13:00.221535 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-04 01:13:00.221542 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 01:13:00.221549 | orchestrator | + echo 2026-04-04 01:13:00.221556 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-04 01:13:00.221563 | orchestrator | + echo 2026-04-04 01:13:00.221570 | orchestrator | + osism container testbed-node-0 ps 2026-04-04 01:13:01.652967 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-04 01:13:01.653064 | orchestrator | c34503d48084 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 
2026-04-04 01:13:01.653075 | orchestrator | ff68bb4e9fd0 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2026-04-04 01:13:01.653083 | orchestrator | a2fb5ce1a291 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2026-04-04 01:13:01.653089 | orchestrator | a99deabe1bb7 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2026-04-04 01:13:01.653096 | orchestrator | 1a78cff71d41 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2026-04-04 01:13:01.653103 | orchestrator | d6fe04ce0c11 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2026-04-04 01:13:01.653109 | orchestrator | c0085ddf15b5 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-04-04 01:13:01.653135 | orchestrator | 214d48ba802c registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2026-04-04 01:13:01.653141 | orchestrator | eb93969808c2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api 2026-04-04 01:13:01.653169 | orchestrator | e6dbbb1f4143 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2026-04-04 01:13:01.653175 | orchestrator | e3aee1319a5f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor 2026-04-04 01:13:01.653181 | orchestrator | 40639e033cbc registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2026-04-04 
01:13:01.653188 | orchestrator | 5637b6559f00 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2026-04-04 01:13:01.653194 | orchestrator | f91c196573b6 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2026-04-04 01:13:01.653200 | orchestrator | bebd43b32161 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2026-04-04 01:13:01.653206 | orchestrator | b95a4b4ffb9d registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2026-04-04 01:13:01.653212 | orchestrator | 9017d1e8ece0 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2026-04-04 01:13:01.653219 | orchestrator | 495d7358fd0e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2026-04-04 01:13:01.653225 | orchestrator | a04850d852fd registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api 2026-04-04 01:13:01.653231 | orchestrator | fb3917dd1f17 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler 2026-04-04 01:13:01.653237 | orchestrator | c553d6f3516a registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2026-04-04 01:13:01.653256 | orchestrator | 67416841bfeb registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2026-04-04 01:13:01.653263 | orchestrator | 8920a81e32f7 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 
11 minutes (healthy) barbican_api 2026-04-04 01:13:01.653269 | orchestrator | 5f311b3dd3e8 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup 2026-04-04 01:13:01.653275 | orchestrator | 0342c0934baa registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume 2026-04-04 01:13:01.653285 | orchestrator | 73f5b1da8c46 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2026-04-04 01:13:01.653292 | orchestrator | b8b8cccacc76 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2026-04-04 01:13:01.653298 | orchestrator | c1f32a8729fc registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2026-04-04 01:13:01.653309 | orchestrator | d2a20d46e0dc registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2026-04-04 01:13:01.653321 | orchestrator | eaca3aa9c0ee registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor 2026-04-04 01:13:01.653328 | orchestrator | f4e6406f7b12 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2026-04-04 01:13:01.653334 | orchestrator | 621667bb46d1 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2026-04-04 01:13:01.653340 | orchestrator | 5b24de482175 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2026-04-04 01:13:01.653347 | orchestrator | 95153cf383a7 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2026-04-04 01:13:01.653353 | orchestrator | 84eaf63c7f2f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2026-04-04 01:13:01.653359 | orchestrator | ff5099f82d2f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2026-04-04 01:13:01.653366 | orchestrator | 2984209195a6 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh 2026-04-04 01:13:01.653372 | orchestrator | 190c54d802c6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2026-04-04 01:13:01.653378 | orchestrator | 4089b44d2bfa registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb 2026-04-04 01:13:01.653384 | orchestrator | 56bf54cf5c45 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2026-04-04 01:13:01.653391 | orchestrator | 677af4cefeb9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2026-04-04 01:13:01.653397 | orchestrator | 0459f9702629 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-0 2026-04-04 01:13:01.653403 | orchestrator | 5fc7d739e472 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived 2026-04-04 01:13:01.653409 | orchestrator | d50ee4a05849 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql 2026-04-04 01:13:01.653427 | orchestrator | 7fd1a8bf3094 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 
minutes ago Up 21 minutes (healthy) haproxy 2026-04-04 01:13:01.653433 | orchestrator | 104d3c51334f registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_northd 2026-04-04 01:13:01.653439 | orchestrator | 371a95e9564f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db 2026-04-04 01:13:01.653445 | orchestrator | ae6b79068150 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db 2026-04-04 01:13:01.653456 | orchestrator | 1b8880b69c8b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-0 2026-04-04 01:13:01.653461 | orchestrator | 4c46f1cad363 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller 2026-04-04 01:13:01.653467 | orchestrator | f72182722ecc registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq 2026-04-04 01:13:01.653473 | orchestrator | 4b0f9b225fd6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd 2026-04-04 01:13:01.653479 | orchestrator | 252a92748e84 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db 2026-04-04 01:13:01.653485 | orchestrator | 9d522f3c822c registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel 2026-04-04 01:13:01.653495 | orchestrator | bad1dd5f8248 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis 2026-04-04 01:13:01.653611 | orchestrator | 6f9a36d2339f registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) memcached 2026-04-04 01:13:01.653618 | 
orchestrator | 02c56e34f9ca registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron 2026-04-04 01:13:01.653624 | orchestrator | b198362f290b registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox 2026-04-04 01:13:01.653630 | orchestrator | abfdb0bbc458 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd 2026-04-04 01:13:01.787876 | orchestrator | 2026-04-04 01:13:01.787971 | orchestrator | ## Images @ testbed-node-0 2026-04-04 01:13:01.787984 | orchestrator | 2026-04-04 01:13:01.787991 | orchestrator | + echo 2026-04-04 01:13:01.787998 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-04 01:13:01.788005 | orchestrator | + echo 2026-04-04 01:13:01.788012 | orchestrator | + osism container testbed-node-0 images 2026-04-04 01:13:03.261652 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-04 01:13:03.261748 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB 2026-04-04 01:13:03.261761 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 a23312b8f550 23 hours ago 277MB 2026-04-04 01:13:03.261765 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0d1d30da4e9f 23 hours ago 679MB 2026-04-04 01:13:03.261770 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 229bafe1995f 23 hours ago 285MB 2026-04-04 01:13:03.261774 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0d7b5b093589 23 hours ago 590MB 2026-04-04 01:13:03.261778 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 0a8852c15177 23 hours ago 1.54GB 2026-04-04 01:13:03.261783 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 c782b0699b2b 23 hours ago 1.57GB 2026-04-04 01:13:03.261788 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 509637fa535a 23 hours ago 333MB 2026-04-04 01:13:03.261795 | orchestrator | 
registry.osism.tech/kolla/proxysql 2024.2 ecb02686b903 23 hours ago 427MB 2026-04-04 01:13:03.261801 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5f5d198c7800 23 hours ago 277MB 2026-04-04 01:13:03.261828 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 47ba33fe94a6 23 hours ago 1.04GB 2026-04-04 01:13:03.261836 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 303fa070b897 23 hours ago 287MB 2026-04-04 01:13:03.261842 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 76e9eb00e943 23 hours ago 463MB 2026-04-04 01:13:03.261848 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f129a0a8c83d 23 hours ago 303MB 2026-04-04 01:13:03.261853 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 530b0e9e30ff 23 hours ago 368MB 2026-04-04 01:13:03.261859 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 51d3e1572312 23 hours ago 317MB 2026-04-04 01:13:03.261876 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b96d4c00611 23 hours ago 309MB 2026-04-04 01:13:03.261882 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 73399a41a43c 23 hours ago 312MB 2026-04-04 01:13:03.261888 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3e338cb4e4a2 23 hours ago 1.16GB 2026-04-04 01:13:03.261894 | orchestrator | registry.osism.tech/kolla/redis 2024.2 22c0005d0282 23 hours ago 284MB 2026-04-04 01:13:03.261899 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 396e3f63f134 23 hours ago 284MB 2026-04-04 01:13:03.261905 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a37c0ef55d0d 23 hours ago 290MB 2026-04-04 01:13:03.261911 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0952a1428589 23 hours ago 290MB 2026-04-04 01:13:03.261917 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 6c9666cba760 23 hours ago 1.11GB 2026-04-04 01:13:03.261923 | 
orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9209c5a6b3d4 23 hours ago 1.42GB 2026-04-04 01:13:03.261929 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 6a15f9d70cd9 23 hours ago 1.42GB 2026-04-04 01:13:03.261934 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 f8e0047d508f 23 hours ago 1.42GB 2026-04-04 01:13:03.261939 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e60cc621ee38 23 hours ago 1.73GB 2026-04-04 01:13:03.261945 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f5d2e1ae79b2 23 hours ago 1.17GB 2026-04-04 01:13:03.261951 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 38cdb54a7b36 23 hours ago 1.22GB 2026-04-04 01:13:03.261957 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3539b0d8005c 23 hours ago 1.22GB 2026-04-04 01:13:03.261972 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9752044d9f02 23 hours ago 1.22GB 2026-04-04 01:13:03.261979 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9a514ed7a301 23 hours ago 1.38GB 2026-04-04 01:13:03.261985 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8169164af7c4 23 hours ago 1.25GB 2026-04-04 01:13:03.261991 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 302cf9ae592f 23 hours ago 1.14GB 2026-04-04 01:13:03.261997 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 306bcd31f4ae 23 hours ago 1.05GB 2026-04-04 01:13:03.262048 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 46d41a330416 23 hours ago 1.05GB 2026-04-04 01:13:03.262054 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 45e59e714839 23 hours ago 1.08GB 2026-04-04 01:13:03.262101 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 a2830d85d264 23 hours ago 987MB 2026-04-04 01:13:03.262107 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9c750efd958 23 hours ago 1GB 2026-04-04 01:13:03.262119 | 
orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 cf1f159fd77c 23 hours ago 1GB 2026-04-04 01:13:03.262123 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 b964a77561b7 23 hours ago 1GB 2026-04-04 01:13:03.262126 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 216d2964d3db 23 hours ago 987MB 2026-04-04 01:13:03.262131 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 7e388739d246 23 hours ago 987MB 2026-04-04 01:13:03.262134 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 6fc6b73ac492 23 hours ago 1GB 2026-04-04 01:13:03.262143 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 7fff68054a23 23 hours ago 1.06GB 2026-04-04 01:13:03.262146 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 05e74ba5bb03 23 hours ago 985MB 2026-04-04 01:13:03.262150 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 8ef02272b67f 23 hours ago 985MB 2026-04-04 01:13:03.262154 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 5a3394e3768e 23 hours ago 985MB 2026-04-04 01:13:03.262158 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 fe0bfab3337a 23 hours ago 984MB 2026-04-04 01:13:03.262162 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ddbd69ef7b4a 23 hours ago 1.06GB 2026-04-04 01:13:03.262165 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 64c6f4a6d289 23 hours ago 1.04GB 2026-04-04 01:13:03.262169 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 672cf4d18cc0 23 hours ago 1.04GB 2026-04-04 01:13:03.262173 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ce402f517c5f 23 hours ago 1.06GB 2026-04-04 01:13:03.262177 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 609ccf7722ae 23 hours ago 1.04GB 2026-04-04 01:13:03.262181 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b825636b3544 23 hours ago 995MB 2026-04-04 
01:13:03.262184 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8dbe39234964 23 hours ago 994MB
2026-04-04 01:13:03.262188 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d6717f4211d0 23 hours ago 1e+03MB
2026-04-04 01:13:03.262192 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 acf92e0f9867 23 hours ago 1e+03MB
2026-04-04 01:13:03.262196 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b0343eeecd32 23 hours ago 995MB
2026-04-04 01:13:03.262200 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ead2374f9763 23 hours ago 995MB
2026-04-04 01:13:03.262204 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 cdfc635d1c7a 23 hours ago 851MB
2026-04-04 01:13:03.262207 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 187c78abd342 23 hours ago 851MB
2026-04-04 01:13:03.262211 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1590e26ae852 23 hours ago 851MB
2026-04-04 01:13:03.262215 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 f3daeeced35b 23 hours ago 851MB
2026-04-04 01:13:03.394389 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-04 01:13:03.394470 | orchestrator | ++ semver latest 5.0.0
2026-04-04 01:13:03.456533 | orchestrator |
2026-04-04 01:13:03.456622 | orchestrator | ## Containers @ testbed-node-1
2026-04-04 01:13:03.456633 | orchestrator |
2026-04-04 01:13:03.456640 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-04 01:13:03.456646 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 01:13:03.456653 | orchestrator | + echo
2026-04-04 01:13:03.456660 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-04 01:13:03.456667 | orchestrator | + echo
2026-04-04 01:13:03.456673 | orchestrator | + osism container testbed-node-1 ps
2026-04-04 01:13:04.936159 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-04 01:13:04.936265 | orchestrator | a3fcc357e495 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-04 01:13:04.936278 | orchestrator | 9efe8f49d9b7 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-04 01:13:04.936310 | orchestrator | 54e89a978a98 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-04 01:13:04.936318 | orchestrator | a156b4aea1d9 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-04 01:13:04.936326 | orchestrator | e9e98b0bd933 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-04 01:13:04.936342 | orchestrator | 80e3d6d1a5a6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-04 01:13:04.936350 | orchestrator | 1e6d891c241b registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-04-04 01:13:04.936357 | orchestrator | dee95404fd57 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2026-04-04 01:13:04.936367 | orchestrator | e61374d07140 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api
2026-04-04 01:13:04.936373 | orchestrator | ce4385acaa50 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-04 01:13:04.936380 | orchestrator | 939fec6b98de registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-04 01:13:04.936386 | orchestrator | b280c79968be registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2026-04-04 01:13:04.936392 | orchestrator | 7958b5903e0d registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-04 01:13:04.936399 | orchestrator | e3743c58a2b2 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-04 01:13:04.936406 | orchestrator | 9b0a8dbbe392 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-04 01:13:04.936412 | orchestrator | 05d47da314c8 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-04 01:13:04.936418 | orchestrator | dc9905e75a93 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-04 01:13:04.936425 | orchestrator | f5cc79c1565c registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-04 01:13:04.936431 | orchestrator | 594babf1749b registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api
2026-04-04 01:13:04.936455 | orchestrator | 5620d5a889a4 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-04-04 01:13:04.936463 | orchestrator | c00ccc2413d9 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-04 01:13:04.936485 | orchestrator | 3c5ebef64cb8 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-04 01:13:04.936492 | orchestrator | 6b3d8a25d7ee registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-04 01:13:04.936498 | orchestrator | 2272aeb4cb50 registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-04 01:13:04.936567 | orchestrator | d9c2ab7a1462 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume
2026-04-04 01:13:04.936575 | orchestrator | 8babdcd74f99 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-04 01:13:04.936581 | orchestrator | cba8d972c88e registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-04 01:13:04.936593 | orchestrator | f6a57aed3858 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-04 01:13:04.936599 | orchestrator | c05a1dc682ef registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-04 01:13:04.936607 | orchestrator | 3ff987a295e2 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-04 01:13:04.936611 | orchestrator | e7757fd4734f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2026-04-04 01:13:04.936615 | orchestrator | 27f4ed32f2a5 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_mysqld_exporter
2026-04-04 01:13:04.936619 | orchestrator | 02b9c75842f2 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-04 01:13:04.936623 | orchestrator | a8438c7f4434 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2026-04-04 01:13:04.936627 | orchestrator | 9c91aab12471 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-04 01:13:04.936630 | orchestrator | d74e8a5da84e registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-04 01:13:04.936634 | orchestrator | bac531437622 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon
2026-04-04 01:13:04.936638 | orchestrator | aa3047192c25 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-04-04 01:13:04.936642 | orchestrator | c01f901f2fbc registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) opensearch_dashboards
2026-04-04 01:13:04.936653 | orchestrator | ae6a5c178824 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-04 01:13:04.936657 | orchestrator | 0ab0942d158d registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch
2026-04-04 01:13:04.936661 | orchestrator | 5ab0916aefa2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-1
2026-04-04 01:13:04.936664 | orchestrator | fb32a6f6f0fc registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-04 01:13:04.936668 | orchestrator | 48ac31d3ded9 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-04-04 01:13:04.936678 | orchestrator | a61745ee3423 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2026-04-04 01:13:04.936682 | orchestrator | ab03e1c91ca4 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_northd
2026-04-04 01:13:04.936686 | orchestrator | 69b2d3e39f30 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-04-04 01:13:04.936689 | orchestrator | 3d3517f4e053 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2026-04-04 01:13:04.936693 | orchestrator | cc2499d80bc6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-1
2026-04-04 01:13:04.936697 | orchestrator | b0e95faf6594 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-04-04 01:13:04.936701 | orchestrator | 632ff135e43b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq
2026-04-04 01:13:04.936705 | orchestrator | bdcc763c98c0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-04-04 01:13:04.936712 | orchestrator | 9ace6c26d7f3 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db
2026-04-04 01:13:04.936715 | orchestrator | 7e872ed50cc1 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel
2026-04-04 01:13:04.936719 | orchestrator | d3fd7377b173 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis
2026-04-04 01:13:04.936723 | orchestrator | 59e063461a36 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) memcached
2026-04-04 01:13:04.936727 | orchestrator | 4246784a6608 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-04 01:13:04.936731 | orchestrator | 070dac25933a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-04-04 01:13:04.936739 | orchestrator | bd7fc8c1fb9c registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-04 01:13:05.077701 | orchestrator |
2026-04-04 01:13:05.077773 | orchestrator | ## Images @ testbed-node-1
2026-04-04 01:13:05.077779 | orchestrator |
2026-04-04 01:13:05.077784 | orchestrator | + echo
2026-04-04 01:13:05.077788 | orchestrator | + echo '## Images @ testbed-node-1'
2026-04-04 01:13:05.077793 | orchestrator | + echo
2026-04-04 01:13:05.077798 | orchestrator | + osism container testbed-node-1 images
2026-04-04 01:13:06.475174 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-04 01:13:06.475261 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB
2026-04-04 01:13:06.475272 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 a23312b8f550 23 hours ago 277MB
2026-04-04 01:13:06.475281 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0d1d30da4e9f 23 hours ago 679MB
2026-04-04 01:13:06.475289 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 229bafe1995f 23 hours ago 285MB
2026-04-04 01:13:06.475298 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0d7b5b093589 23 hours ago 590MB
2026-04-04 01:13:06.475306 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 0a8852c15177 23 hours ago 1.54GB
2026-04-04 01:13:06.475314 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 c782b0699b2b 23 hours ago 1.57GB
2026-04-04 01:13:06.475322 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 509637fa535a 23 hours ago 333MB
2026-04-04 01:13:06.475329 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 ecb02686b903 23 hours ago 427MB
2026-04-04 01:13:06.475337 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5f5d198c7800 23 hours ago 277MB
2026-04-04 01:13:06.475345 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 47ba33fe94a6 23 hours ago 1.04GB
2026-04-04 01:13:06.475353 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 303fa070b897 23 hours ago 287MB
2026-04-04 01:13:06.475361 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 76e9eb00e943 23 hours ago 463MB
2026-04-04 01:13:06.475369 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f129a0a8c83d 23 hours ago 303MB
2026-04-04 01:13:06.475377 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 530b0e9e30ff 23 hours ago 368MB
2026-04-04 01:13:06.475385 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 51d3e1572312 23 hours ago 317MB
2026-04-04 01:13:06.475393 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b96d4c00611 23 hours ago 309MB
2026-04-04 01:13:06.475401 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 73399a41a43c 23 hours ago 312MB
2026-04-04 01:13:06.475409 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3e338cb4e4a2 23 hours ago 1.16GB
2026-04-04 01:13:06.475417 | orchestrator | registry.osism.tech/kolla/redis 2024.2 22c0005d0282 23 hours ago 284MB
2026-04-04 01:13:06.475425 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 396e3f63f134 23 hours ago 284MB
2026-04-04 01:13:06.475433 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a37c0ef55d0d 23 hours ago 290MB
2026-04-04 01:13:06.475441 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0952a1428589 23 hours ago 290MB
2026-04-04 01:13:06.475449 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 6c9666cba760 23 hours ago 1.11GB
2026-04-04 01:13:06.475456 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9209c5a6b3d4 23 hours ago 1.42GB
2026-04-04 01:13:06.475568 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 6a15f9d70cd9 23 hours ago 1.42GB
2026-04-04 01:13:06.475579 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 f8e0047d508f 23 hours ago 1.42GB
2026-04-04 01:13:06.475587 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e60cc621ee38 23 hours ago 1.73GB
2026-04-04 01:13:06.475595 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f5d2e1ae79b2 23 hours ago 1.17GB
2026-04-04 01:13:06.475603 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 38cdb54a7b36 23 hours ago 1.22GB
2026-04-04 01:13:06.475612 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3539b0d8005c 23 hours ago 1.22GB
2026-04-04 01:13:06.475620 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9752044d9f02 23 hours ago 1.22GB
2026-04-04 01:13:06.475628 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9a514ed7a301 23 hours ago 1.38GB
2026-04-04 01:13:06.475653 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8169164af7c4 23 hours ago 1.25GB
2026-04-04 01:13:06.475661 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 302cf9ae592f 23 hours ago 1.14GB
2026-04-04 01:13:06.475669 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 306bcd31f4ae 23 hours ago 1.05GB
2026-04-04 01:13:06.475693 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 46d41a330416 23 hours ago 1.05GB
2026-04-04 01:13:06.475702 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 45e59e714839 23 hours ago 1.08GB
2026-04-04 01:13:06.475710 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 a2830d85d264 23 hours ago 987MB
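The xtrace lines in this log (`+ for node in …`, `++ semver latest 5.0.0`, `+ [[ -1 -eq -1 ]]`, `+ [[ latest != \l\a\t\e\s\t ]]`) come from a per-node reporting loop. A minimal sketch of its apparent shape, assuming `semver`-style behavior: the `compare` function here is a hypothetical stand-in for the real `semver` helper (not shown in the log), modeling only the case exercised in the trace, where comparing `latest` against `5.0.0` prints `-1`.

```shell
#!/usr/bin/env bash
# Reconstruction of the node-reporting loop implied by the xtrace output.
# Not the original script; names and guard semantics are inferred.
set -e

MANAGER_VERSION=latest  # value seen in the trace

compare() {
    # Hypothetical stand-in for the `semver` CLI invoked in the log.
    # Only models the traced case: "latest" compares as -1 against 5.0.0.
    if [[ "$1" == "latest" ]]; then echo -1; else echo 0; fi
}

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
    # The trace evaluates both tests: a result of -1 ("older than 5.0.0")
    # skips the node, unless the version is the literal string "latest".
    if [[ "$(compare "${MANAGER_VERSION}" 5.0.0)" -eq -1 ]]; then
        if [[ "${MANAGER_VERSION}" != "latest" ]]; then
            continue
        fi
    fi
    echo
    echo "## Containers @ ${node}"
    echo
    # The real job runs here: osism container "${node}" ps
    # followed by:           osism container "${node}" images
done
```

With `MANAGER_VERSION=latest`, both guards pass (matching the `[[ -1 -eq -1 ]]` / `[[ latest != \l\a\t\e\s\t ]]` pair in the trace) and every node's section is printed.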
2026-04-04 01:13:06.475720 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9c750efd958 23 hours ago 1GB
2026-04-04 01:13:06.475730 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 cf1f159fd77c 23 hours ago 1GB
2026-04-04 01:13:06.475740 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 b964a77561b7 23 hours ago 1GB
2026-04-04 01:13:06.475749 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ddbd69ef7b4a 23 hours ago 1.06GB
2026-04-04 01:13:06.475758 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 64c6f4a6d289 23 hours ago 1.04GB
2026-04-04 01:13:06.475767 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 672cf4d18cc0 23 hours ago 1.04GB
2026-04-04 01:13:06.475777 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ce402f517c5f 23 hours ago 1.06GB
2026-04-04 01:13:06.475787 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 609ccf7722ae 23 hours ago 1.04GB
2026-04-04 01:13:06.475797 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b825636b3544 23 hours ago 995MB
2026-04-04 01:13:06.475807 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8dbe39234964 23 hours ago 994MB
2026-04-04 01:13:06.475816 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d6717f4211d0 23 hours ago 1e+03MB
2026-04-04 01:13:06.475826 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 acf92e0f9867 23 hours ago 1e+03MB
2026-04-04 01:13:06.475835 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b0343eeecd32 23 hours ago 995MB
2026-04-04 01:13:06.475844 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ead2374f9763 23 hours ago 995MB
2026-04-04 01:13:06.475854 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 cdfc635d1c7a 23 hours ago 851MB
2026-04-04 01:13:06.475864 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 187c78abd342 23 hours ago 851MB
2026-04-04 01:13:06.475881 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1590e26ae852 23 hours ago 851MB
2026-04-04 01:13:06.475890 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 f3daeeced35b 23 hours ago 851MB
2026-04-04 01:13:06.608588 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-04 01:13:06.611148 | orchestrator | ++ semver latest 5.0.0
2026-04-04 01:13:06.655127 | orchestrator |
2026-04-04 01:13:06.655213 | orchestrator | ## Containers @ testbed-node-2
2026-04-04 01:13:06.655225 | orchestrator |
2026-04-04 01:13:06.655231 | orchestrator | + [[ -1 -eq -1 ]]
2026-04-04 01:13:06.655238 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-04-04 01:13:06.655243 | orchestrator | + echo
2026-04-04 01:13:06.655250 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-04 01:13:06.655257 | orchestrator | + echo
2026-04-04 01:13:06.655263 | orchestrator | + osism container testbed-node-2 ps
2026-04-04 01:13:08.061902 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-04 01:13:08.062001 | orchestrator | 9df38415f8f3 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2026-04-04 01:13:08.062050 | orchestrator | 30c10d6a83d2 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2026-04-04 01:13:08.062059 | orchestrator | 76dce2b466ad registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2026-04-04 01:13:08.062065 | orchestrator | 17f4722c4d49 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2026-04-04 01:13:08.062072 | orchestrator | 27be07443f2c registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2026-04-04 01:13:08.062079 | orchestrator | c9aa0163dc7d registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-04 01:13:08.062085 | orchestrator | 471ebc0809ad registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2026-04-04 01:13:08.062092 | orchestrator | fd8aebff1612 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2026-04-04 01:13:08.062099 | orchestrator | bc84a2882d5e registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) placement_api
2026-04-04 01:13:08.062105 | orchestrator | 64e011af221b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2026-04-04 01:13:08.062111 | orchestrator | 35db193be0f3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 8 minutes (healthy) nova_conductor
2026-04-04 01:13:08.062118 | orchestrator | ef23f15f5777 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server
2026-04-04 01:13:08.062124 | orchestrator | 4b525ec5fa8b registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker
2026-04-04 01:13:08.062130 | orchestrator | 106db05f893a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns
2026-04-04 01:13:08.062155 | orchestrator | 406cf1a706d9 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2026-04-04 01:13:08.062191 | orchestrator | be2575c945ba registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2026-04-04 01:13:08.062198 | orchestrator | 75e63b3b37c5 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2026-04-04 01:13:08.062204 | orchestrator | 31578851924a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2026-04-04 01:13:08.062210 | orchestrator | 869e616bbb34 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_api
2026-04-04 01:13:08.062216 | orchestrator | 27f701ba948e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 8 minutes (healthy) nova_scheduler
2026-04-04 01:13:08.062222 | orchestrator | dcfdb969b2fa registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2026-04-04 01:13:08.062246 | orchestrator | 8e65b5a57443 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2026-04-04 01:13:08.062252 | orchestrator | ae75697f0e3f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2026-04-04 01:13:08.062258 | orchestrator | 8135e733362e registry.osism.tech/kolla/cinder-backup:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_backup
2026-04-04 01:13:08.062263 | orchestrator | 6e7a6bcfd4b1 registry.osism.tech/kolla/cinder-volume:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_volume
2026-04-04 01:13:08.062269 | orchestrator | c716e70bf38e registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2026-04-04 01:13:08.062275 | orchestrator | 3e9fea71f971 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2026-04-04 01:13:08.062281 | orchestrator | 1c50c2b4e740 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2026-04-04 01:13:08.062287 | orchestrator | 3014e30158c0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2026-04-04 01:13:08.062294 | orchestrator | 2b9adc3f3023 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_cadvisor
2026-04-04 01:13:08.062299 | orchestrator | ee17b84d99c0 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter
2026-04-04 01:13:08.062306 | orchestrator | 3427a84fd347 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2026-04-04 01:13:08.062312 | orchestrator | e8feb92df67d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2026-04-04 01:13:08.062318 | orchestrator | 8e74c6715b0b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2026-04-04 01:13:08.062331 | orchestrator | 8f05eeebbeaf registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2026-04-04 01:13:08.062337 | orchestrator | 17a1ab425084 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2026-04-04 01:13:08.062344 | orchestrator | 67cfdcd6b31b registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon
2026-04-04 01:13:08.062350 | orchestrator | 83c856348e6d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_ssh
2026-04-04 01:13:08.062357 | orchestrator | 623b906d9144 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) opensearch_dashboards
2026-04-04 01:13:08.062362 | orchestrator | 9463a92288ee registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2026-04-04 01:13:08.062368 | orchestrator | 5e8d4152fa0c registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 19 minutes (healthy) opensearch
2026-04-04 01:13:08.062374 | orchestrator | bccf31479128 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 21 minutes ago Up 21 minutes ceph-crash-testbed-node-2
2026-04-04 01:13:08.062379 | orchestrator | e2af1472c29a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2026-04-04 01:13:08.062385 | orchestrator | af0d71b371b4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) proxysql
2026-04-04 01:13:08.062397 | orchestrator | cf3530e4523a registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) haproxy
2026-04-04 01:13:08.062404 | orchestrator | dd7b23230e2a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes ovn_northd
2026-04-04 01:13:08.062409 | orchestrator | d490ecef7a16 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_sb_db
2026-04-04 01:13:08.062415 | orchestrator | 795c93061aba registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes ovn_nb_db
2026-04-04 01:13:08.062421 | orchestrator | 105648c31913 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) rabbitmq
2026-04-04 01:13:08.062433 | orchestrator | adbaa8de2c30 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 25 minutes ago Up 25 minutes ceph-mon-testbed-node-2
2026-04-04 01:13:08.062439 | orchestrator | 49bb79276642 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_controller
2026-04-04 01:13:08.062445 | orchestrator | cb628e684bf9 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_vswitchd
2026-04-04 01:13:08.062451 | orchestrator | 7c3146c9704a registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) openvswitch_db
2026-04-04 01:13:08.062456 | orchestrator | 152a79ddbaf5 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis_sentinel
2026-04-04 01:13:08.062470 | orchestrator | de0ae9567a7b registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) redis
2026-04-04 01:13:08.062477 | orchestrator | 943cfdee19db registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2026-04-04 01:13:08.062483 | orchestrator | f4949471eaf9 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes cron
2026-04-04 01:13:08.062489 | orchestrator | ff795c512673 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes kolla_toolbox
2026-04-04 01:13:08.062495 | orchestrator | a23da0a5f5c5 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes fluentd
2026-04-04 01:13:08.201718 | orchestrator |
2026-04-04 01:13:08.201803 | orchestrator | ## Images @ testbed-node-2
2026-04-04 01:13:08.201815 | orchestrator |
2026-04-04 01:13:08.201822 | orchestrator | + echo
2026-04-04 01:13:08.201829 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-04 01:13:08.201836 | orchestrator | + echo
2026-04-04 01:13:08.201842 | orchestrator | + osism container testbed-node-2 images
2026-04-04 01:13:09.659689 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-04 01:13:09.659762 | orchestrator | registry.osism.tech/osism/ceph-daemon reef f46b7418fb77 21 hours ago 1.35GB
2026-04-04 01:13:09.659767 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 a23312b8f550 23 hours ago 277MB
2026-04-04 01:13:09.659785 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 0d1d30da4e9f 23 hours ago 679MB
2026-04-04 01:13:09.659789 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 229bafe1995f 23 hours ago 285MB
2026-04-04 01:13:09.659793 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 0d7b5b093589 23 hours ago 590MB
2026-04-04 01:13:09.659797 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 c782b0699b2b 23 hours ago 1.57GB
2026-04-04 01:13:09.659801 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 0a8852c15177 23 hours ago 1.54GB
2026-04-04 01:13:09.659805 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 509637fa535a 23 hours ago 333MB
2026-04-04 01:13:09.659809 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 ecb02686b903 23 hours ago 427MB
2026-04-04 01:13:09.659812 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 47ba33fe94a6 23 hours ago 1.04GB
2026-04-04 01:13:09.659816 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5f5d198c7800 23 hours ago 277MB
2026-04-04 01:13:09.659820 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 303fa070b897 23 hours ago 287MB
2026-04-04 01:13:09.659824 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 76e9eb00e943 23 hours ago 463MB
2026-04-04 01:13:09.659827 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 f129a0a8c83d 23 hours ago 303MB
2026-04-04 01:13:09.659831 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 530b0e9e30ff 23 hours ago 368MB
2026-04-04 01:13:09.659835 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 51d3e1572312 23 hours ago 317MB
2026-04-04 01:13:09.659838 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 7b96d4c00611 23 hours ago 309MB
2026-04-04 01:13:09.659842 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 73399a41a43c 23 hours ago 312MB
2026-04-04 01:13:09.659846 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 3e338cb4e4a2 23 hours ago 1.16GB
2026-04-04 01:13:09.659866 | orchestrator | registry.osism.tech/kolla/redis 2024.2 22c0005d0282 23 hours ago 284MB
2026-04-04 01:13:09.659871 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 396e3f63f134 23 hours ago 284MB
2026-04-04 01:13:09.659874 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 0952a1428589 23 hours ago 290MB
2026-04-04 01:13:09.659878 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a37c0ef55d0d 23 hours ago 290MB
2026-04-04 01:13:09.659882 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 6c9666cba760 23 hours ago 1.11GB
2026-04-04 01:13:09.659886 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9209c5a6b3d4 23 hours ago 1.42GB
2026-04-04 01:13:09.659889 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 6a15f9d70cd9 23 hours ago 1.42GB
2026-04-04 01:13:09.659893 | orchestrator | registry.osism.tech/kolla/cinder-backup 2024.2 f8e0047d508f 23 hours ago 1.42GB
2026-04-04 01:13:09.659897 | orchestrator | registry.osism.tech/kolla/cinder-volume 2024.2 e60cc621ee38 23 hours ago 1.73GB
2026-04-04 01:13:09.659901 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f5d2e1ae79b2 23 hours ago 1.17GB
2026-04-04 01:13:09.659904 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 38cdb54a7b36 23 hours ago 1.22GB
2026-04-04 01:13:09.659908 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3539b0d8005c 23 hours ago 1.22GB
2026-04-04 01:13:09.659912 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 9752044d9f02 23 hours ago 1.22GB
2026-04-04 01:13:09.659916 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 9a514ed7a301 23 hours ago 1.38GB
2026-04-04 01:13:09.659920 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8169164af7c4 23 hours ago 1.25GB
2026-04-04 01:13:09.659924 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 302cf9ae592f 23 hours ago 1.14GB
2026-04-04 01:13:09.659927 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 306bcd31f4ae 23 hours ago 1.05GB
2026-04-04 01:13:09.659942 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 46d41a330416 23 hours ago 1.05GB
2026-04-04 01:13:09.659946 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 45e59e714839 23 hours ago 1.08GB
2026-04-04 01:13:09.659949 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 a2830d85d264 23 hours ago 987MB
2026-04-04 01:13:09.659953 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c9c750efd958 23 hours ago 1GB
2026-04-04 01:13:09.659957 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 cf1f159fd77c 23 hours ago 1GB
2026-04-04 01:13:09.659961 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 b964a77561b7 23 hours ago 1GB
2026-04-04 01:13:09.659965 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 ddbd69ef7b4a 23 hours ago 1.06GB
2026-04-04 01:13:09.659969 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 64c6f4a6d289 23 hours ago 1.04GB
2026-04-04 01:13:09.659973 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 672cf4d18cc0 23 hours ago 1.04GB
2026-04-04 01:13:09.659977 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 ce402f517c5f 23 hours ago 1.06GB
2026-04-04 01:13:09.659981 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 609ccf7722ae 23 hours ago 1.04GB 2026-04-04 01:13:09.659984 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 b825636b3544 23 hours ago 995MB 2026-04-04 01:13:09.659993 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 8dbe39234964 23 hours ago 994MB 2026-04-04 01:13:09.660000 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 d6717f4211d0 23 hours ago 1e+03MB 2026-04-04 01:13:09.660004 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 acf92e0f9867 23 hours ago 1e+03MB 2026-04-04 01:13:09.660008 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 b0343eeecd32 23 hours ago 995MB 2026-04-04 01:13:09.660012 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ead2374f9763 23 hours ago 995MB 2026-04-04 01:13:09.660015 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 cdfc635d1c7a 23 hours ago 851MB 2026-04-04 01:13:09.660019 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 187c78abd342 23 hours ago 851MB 2026-04-04 01:13:09.660023 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1590e26ae852 23 hours ago 851MB 2026-04-04 01:13:09.660027 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 f3daeeced35b 23 hours ago 851MB 2026-04-04 01:13:09.794785 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-04-04 01:13:09.804045 | orchestrator | + set -e 2026-04-04 01:13:09.804116 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 01:13:09.804987 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 01:13:09.805009 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 01:13:09.805014 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 01:13:09.805019 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 01:13:09.805023 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 01:13:09.805035 | 
orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 01:13:09.805039 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:13:09.805043 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:13:09.805048 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 01:13:09.805052 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 01:13:09.805056 | orchestrator | ++ export ARA=false 2026-04-04 01:13:09.805060 | orchestrator | ++ ARA=false 2026-04-04 01:13:09.805064 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 01:13:09.805068 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 01:13:09.805072 | orchestrator | ++ export TEMPEST=true 2026-04-04 01:13:09.805076 | orchestrator | ++ TEMPEST=true 2026-04-04 01:13:09.805080 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 01:13:09.805084 | orchestrator | ++ IS_ZUUL=true 2026-04-04 01:13:09.805088 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:13:09.805092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:13:09.805096 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 01:13:09.805100 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 01:13:09.805103 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 01:13:09.805107 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 01:13:09.805111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 01:13:09.805115 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 01:13:09.805283 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 01:13:09.805348 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 01:13:09.805360 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-04 01:13:09.805371 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-04 01:13:09.815864 | orchestrator | + set -e 2026-04-04 01:13:09.815948 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 01:13:09.815962 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-04 01:13:09.815973 | orchestrator | ++ INTERACTIVE=false 2026-04-04 01:13:09.815982 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 01:13:09.815991 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 01:13:09.816000 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-04 01:13:09.817203 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-04 01:13:09.824257 | orchestrator | 2026-04-04 01:13:09.824352 | orchestrator | # Ceph status 2026-04-04 01:13:09.824371 | orchestrator | 2026-04-04 01:13:09.824387 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:13:09.824399 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:13:09.824409 | orchestrator | + echo 2026-04-04 01:13:09.824418 | orchestrator | + echo '# Ceph status' 2026-04-04 01:13:09.824428 | orchestrator | + echo 2026-04-04 01:13:09.824436 | orchestrator | + ceph -s 2026-04-04 01:13:10.381098 | orchestrator | cluster: 2026-04-04 01:13:10.381193 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-04 01:13:10.381201 | orchestrator | health: HEALTH_OK 2026-04-04 01:13:10.381206 | orchestrator | 2026-04-04 01:13:10.381210 | orchestrator | services: 2026-04-04 01:13:10.381214 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 25m) 2026-04-04 01:13:10.381220 | orchestrator | mgr: testbed-node-1(active, since 15m), standbys: testbed-node-2, testbed-node-0 2026-04-04 01:13:10.381225 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-04-04 01:13:10.381229 | orchestrator | osd: 6 osds: 6 up (since 22m), 6 in (since 22m) 2026-04-04 01:13:10.381234 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-04 01:13:10.381238 | orchestrator | 2026-04-04 01:13:10.381242 | orchestrator | data: 2026-04-04 01:13:10.381246 | orchestrator | volumes: 1/1 healthy 2026-04-04 01:13:10.381250 
| orchestrator | pools: 14 pools, 401 pgs 2026-04-04 01:13:10.381254 | orchestrator | objects: 556 objects, 2.2 GiB 2026-04-04 01:13:10.381257 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-04-04 01:13:10.381261 | orchestrator | pgs: 401 active+clean 2026-04-04 01:13:10.381265 | orchestrator | 2026-04-04 01:13:10.424555 | orchestrator | 2026-04-04 01:13:10.424633 | orchestrator | # Ceph versions 2026-04-04 01:13:10.424642 | orchestrator | 2026-04-04 01:13:10.424650 | orchestrator | + echo 2026-04-04 01:13:10.424657 | orchestrator | + echo '# Ceph versions' 2026-04-04 01:13:10.424665 | orchestrator | + echo 2026-04-04 01:13:10.424671 | orchestrator | + ceph versions 2026-04-04 01:13:10.978239 | orchestrator | { 2026-04-04 01:13:10.978318 | orchestrator | "mon": { 2026-04-04 01:13:10.978330 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:13:10.978339 | orchestrator | }, 2026-04-04 01:13:10.978346 | orchestrator | "mgr": { 2026-04-04 01:13:10.978370 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:13:10.978377 | orchestrator | }, 2026-04-04 01:13:10.978383 | orchestrator | "osd": { 2026-04-04 01:13:10.978390 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 6 2026-04-04 01:13:10.978396 | orchestrator | }, 2026-04-04 01:13:10.978402 | orchestrator | "mds": { 2026-04-04 01:13:10.978409 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:13:10.978415 | orchestrator | }, 2026-04-04 01:13:10.978421 | orchestrator | "rgw": { 2026-04-04 01:13:10.978428 | orchestrator | "ceph version 18.2.8 (efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 3 2026-04-04 01:13:10.978434 | orchestrator | }, 2026-04-04 01:13:10.978440 | orchestrator | "overall": { 2026-04-04 01:13:10.978447 | orchestrator | "ceph version 18.2.8 
(efac5a54607c13fa50d4822e50242b86e6e446df) reef (stable)": 18 2026-04-04 01:13:10.978453 | orchestrator | } 2026-04-04 01:13:10.978458 | orchestrator | } 2026-04-04 01:13:11.022744 | orchestrator | 2026-04-04 01:13:11.022827 | orchestrator | # Ceph OSD tree 2026-04-04 01:13:11.022837 | orchestrator | 2026-04-04 01:13:11.022844 | orchestrator | + echo 2026-04-04 01:13:11.022850 | orchestrator | + echo '# Ceph OSD tree' 2026-04-04 01:13:11.022858 | orchestrator | + echo 2026-04-04 01:13:11.022864 | orchestrator | + ceph osd df tree 2026-04-04 01:13:11.537728 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-04 01:13:11.537807 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 417 MiB 113 GiB 5.91 1.00 - root default 2026-04-04 01:13:11.537814 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-3 2026-04-04 01:13:11.537819 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 65 MiB 19 GiB 6.43 1.09 209 up osd.1 2026-04-04 01:13:11.537824 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.38 0.91 181 up osd.5 2026-04-04 01:13:11.537828 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4 2026-04-04 01:13:11.537832 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 912 MiB 843 MiB 1 KiB 70 MiB 19 GiB 4.46 0.75 174 up osd.0 2026-04-04 01:13:11.537836 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.35 1.25 218 up osd.3 2026-04-04 01:13:11.537857 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-04-04 01:13:11.537861 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.65 1.13 198 up osd.2 2026-04-04 01:13:11.537865 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 987 MiB 1 KiB 70 MiB 19 GiB 5.16 0.87 190 up osd.4 2026-04-04 
01:13:11.537869 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 417 MiB 113 GiB 5.91 2026-04-04 01:13:11.537873 | orchestrator | MIN/MAX VAR: 0.75/1.25 STDDEV: 0.99 2026-04-04 01:13:11.589274 | orchestrator | 2026-04-04 01:13:11.589344 | orchestrator | # Ceph monitor status 2026-04-04 01:13:11.589354 | orchestrator | 2026-04-04 01:13:11.589360 | orchestrator | + echo 2026-04-04 01:13:11.589366 | orchestrator | + echo '# Ceph monitor status' 2026-04-04 01:13:11.589372 | orchestrator | + echo 2026-04-04 01:13:11.589378 | orchestrator | + ceph mon stat 2026-04-04 01:13:12.174752 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-04 01:13:12.228293 | orchestrator | 2026-04-04 01:13:12.228354 | orchestrator | # Ceph quorum status 2026-04-04 01:13:12.228361 | orchestrator | 2026-04-04 01:13:12.228366 | orchestrator | + echo 2026-04-04 01:13:12.228370 | orchestrator | + echo '# Ceph quorum status' 2026-04-04 01:13:12.228374 | orchestrator | + echo 2026-04-04 01:13:12.228984 | orchestrator | + ceph quorum_status 2026-04-04 01:13:12.228996 | orchestrator | + jq 2026-04-04 01:13:12.811089 | orchestrator | { 2026-04-04 01:13:12.811161 | orchestrator | "election_epoch": 8, 2026-04-04 01:13:12.811168 | orchestrator | "quorum": [ 2026-04-04 01:13:12.811173 | orchestrator | 0, 2026-04-04 01:13:12.811178 | orchestrator | 1, 2026-04-04 01:13:12.811182 | orchestrator | 2 2026-04-04 01:13:12.811186 | orchestrator | ], 2026-04-04 01:13:12.811190 | orchestrator | "quorum_names": [ 2026-04-04 01:13:12.811194 | orchestrator | "testbed-node-0", 2026-04-04 01:13:12.811199 | orchestrator | "testbed-node-1", 2026-04-04 01:13:12.811203 | orchestrator | 
"testbed-node-2" 2026-04-04 01:13:12.811207 | orchestrator | ], 2026-04-04 01:13:12.811211 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-04 01:13:12.811216 | orchestrator | "quorum_age": 1506, 2026-04-04 01:13:12.811220 | orchestrator | "features": { 2026-04-04 01:13:12.811224 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-04 01:13:12.811228 | orchestrator | "quorum_mon": [ 2026-04-04 01:13:12.811232 | orchestrator | "kraken", 2026-04-04 01:13:12.811236 | orchestrator | "luminous", 2026-04-04 01:13:12.811240 | orchestrator | "mimic", 2026-04-04 01:13:12.811244 | orchestrator | "osdmap-prune", 2026-04-04 01:13:12.811248 | orchestrator | "nautilus", 2026-04-04 01:13:12.811252 | orchestrator | "octopus", 2026-04-04 01:13:12.811256 | orchestrator | "pacific", 2026-04-04 01:13:12.811260 | orchestrator | "elector-pinging", 2026-04-04 01:13:12.811264 | orchestrator | "quincy", 2026-04-04 01:13:12.811268 | orchestrator | "reef" 2026-04-04 01:13:12.811272 | orchestrator | ] 2026-04-04 01:13:12.811276 | orchestrator | }, 2026-04-04 01:13:12.811280 | orchestrator | "monmap": { 2026-04-04 01:13:12.811284 | orchestrator | "epoch": 1, 2026-04-04 01:13:12.811288 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-04 01:13:12.811292 | orchestrator | "modified": "2026-04-04T00:47:48.800588Z", 2026-04-04 01:13:12.811296 | orchestrator | "created": "2026-04-04T00:47:48.800588Z", 2026-04-04 01:13:12.811300 | orchestrator | "min_mon_release": 18, 2026-04-04 01:13:12.811304 | orchestrator | "min_mon_release_name": "reef", 2026-04-04 01:13:12.811308 | orchestrator | "election_strategy": 1, 2026-04-04 01:13:12.811312 | orchestrator | "disallowed_leaders": "", 2026-04-04 01:13:12.811316 | orchestrator | "stretch_mode": false, 2026-04-04 01:13:12.811320 | orchestrator | "tiebreaker_mon": "", 2026-04-04 01:13:12.811323 | orchestrator | "removed_ranks": "", 2026-04-04 01:13:12.811327 | orchestrator | "features": { 2026-04-04 
01:13:12.811341 | orchestrator | "persistent": [ 2026-04-04 01:13:12.811345 | orchestrator | "kraken", 2026-04-04 01:13:12.811354 | orchestrator | "luminous", 2026-04-04 01:13:12.811358 | orchestrator | "mimic", 2026-04-04 01:13:12.811362 | orchestrator | "osdmap-prune", 2026-04-04 01:13:12.811382 | orchestrator | "nautilus", 2026-04-04 01:13:12.811386 | orchestrator | "octopus", 2026-04-04 01:13:12.811390 | orchestrator | "pacific", 2026-04-04 01:13:12.811394 | orchestrator | "elector-pinging", 2026-04-04 01:13:12.811397 | orchestrator | "quincy", 2026-04-04 01:13:12.811401 | orchestrator | "reef" 2026-04-04 01:13:12.811405 | orchestrator | ], 2026-04-04 01:13:12.811408 | orchestrator | "optional": [] 2026-04-04 01:13:12.811412 | orchestrator | }, 2026-04-04 01:13:12.811416 | orchestrator | "mons": [ 2026-04-04 01:13:12.811489 | orchestrator | { 2026-04-04 01:13:12.811496 | orchestrator | "rank": 0, 2026-04-04 01:13:12.811522 | orchestrator | "name": "testbed-node-0", 2026-04-04 01:13:12.811530 | orchestrator | "public_addrs": { 2026-04-04 01:13:12.811536 | orchestrator | "addrvec": [ 2026-04-04 01:13:12.811542 | orchestrator | { 2026-04-04 01:13:12.811566 | orchestrator | "type": "v2", 2026-04-04 01:13:12.811576 | orchestrator | "addr": "192.168.16.10:3300", 2026-04-04 01:13:12.811582 | orchestrator | "nonce": 0 2026-04-04 01:13:12.811587 | orchestrator | }, 2026-04-04 01:13:12.811593 | orchestrator | { 2026-04-04 01:13:12.811599 | orchestrator | "type": "v1", 2026-04-04 01:13:12.811605 | orchestrator | "addr": "192.168.16.10:6789", 2026-04-04 01:13:12.811611 | orchestrator | "nonce": 0 2026-04-04 01:13:12.811616 | orchestrator | } 2026-04-04 01:13:12.811621 | orchestrator | ] 2026-04-04 01:13:12.811630 | orchestrator | }, 2026-04-04 01:13:12.811638 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-04-04 01:13:12.811644 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-04-04 01:13:12.811649 | orchestrator | "priority": 0, 2026-04-04 01:13:12.811655 
| orchestrator | "weight": 0, 2026-04-04 01:13:12.811660 | orchestrator | "crush_location": "{}" 2026-04-04 01:13:12.811666 | orchestrator | }, 2026-04-04 01:13:12.811672 | orchestrator | { 2026-04-04 01:13:12.811678 | orchestrator | "rank": 1, 2026-04-04 01:13:12.811684 | orchestrator | "name": "testbed-node-1", 2026-04-04 01:13:12.811690 | orchestrator | "public_addrs": { 2026-04-04 01:13:12.811697 | orchestrator | "addrvec": [ 2026-04-04 01:13:12.811703 | orchestrator | { 2026-04-04 01:13:12.811710 | orchestrator | "type": "v2", 2026-04-04 01:13:12.811716 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-04 01:13:12.811722 | orchestrator | "nonce": 0 2026-04-04 01:13:12.811728 | orchestrator | }, 2026-04-04 01:13:12.811736 | orchestrator | { 2026-04-04 01:13:12.811741 | orchestrator | "type": "v1", 2026-04-04 01:13:12.811746 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-04 01:13:12.811750 | orchestrator | "nonce": 0 2026-04-04 01:13:12.811755 | orchestrator | } 2026-04-04 01:13:12.811759 | orchestrator | ] 2026-04-04 01:13:12.811764 | orchestrator | }, 2026-04-04 01:13:12.811781 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-04 01:13:12.811786 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-04 01:13:12.811790 | orchestrator | "priority": 0, 2026-04-04 01:13:12.811795 | orchestrator | "weight": 0, 2026-04-04 01:13:12.811799 | orchestrator | "crush_location": "{}" 2026-04-04 01:13:12.811804 | orchestrator | }, 2026-04-04 01:13:12.811808 | orchestrator | { 2026-04-04 01:13:12.811812 | orchestrator | "rank": 2, 2026-04-04 01:13:12.811818 | orchestrator | "name": "testbed-node-2", 2026-04-04 01:13:12.811824 | orchestrator | "public_addrs": { 2026-04-04 01:13:12.811830 | orchestrator | "addrvec": [ 2026-04-04 01:13:12.811836 | orchestrator | { 2026-04-04 01:13:12.811842 | orchestrator | "type": "v2", 2026-04-04 01:13:12.811848 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-04 01:13:12.811854 | orchestrator | "nonce": 0 
2026-04-04 01:13:12.811860 | orchestrator | }, 2026-04-04 01:13:12.811867 | orchestrator | { 2026-04-04 01:13:12.811873 | orchestrator | "type": "v1", 2026-04-04 01:13:12.811880 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-04 01:13:12.811886 | orchestrator | "nonce": 0 2026-04-04 01:13:12.811893 | orchestrator | } 2026-04-04 01:13:12.811901 | orchestrator | ] 2026-04-04 01:13:12.811905 | orchestrator | }, 2026-04-04 01:13:12.811910 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-04 01:13:12.811914 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-04 01:13:12.811919 | orchestrator | "priority": 0, 2026-04-04 01:13:12.811924 | orchestrator | "weight": 0, 2026-04-04 01:13:12.811928 | orchestrator | "crush_location": "{}" 2026-04-04 01:13:12.811940 | orchestrator | } 2026-04-04 01:13:12.811945 | orchestrator | ] 2026-04-04 01:13:12.811949 | orchestrator | } 2026-04-04 01:13:12.811954 | orchestrator | } 2026-04-04 01:13:12.812060 | orchestrator | 2026-04-04 01:13:12.812066 | orchestrator | # Ceph free space status 2026-04-04 01:13:12.812070 | orchestrator | 2026-04-04 01:13:12.812074 | orchestrator | + echo 2026-04-04 01:13:12.812078 | orchestrator | + echo '# Ceph free space status' 2026-04-04 01:13:12.812082 | orchestrator | + echo 2026-04-04 01:13:12.812086 | orchestrator | + ceph df 2026-04-04 01:13:13.401142 | orchestrator | --- RAW STORAGE --- 2026-04-04 01:13:13.401229 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-04 01:13:13.401250 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-04-04 01:13:13.401257 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91 2026-04-04 01:13:13.401263 | orchestrator | 2026-04-04 01:13:13.401270 | orchestrator | --- POOLS --- 2026-04-04 01:13:13.401277 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-04 01:13:13.401285 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-04 01:13:13.401290 | orchestrator | cephfs_data 2 32 0 B 0 0 
B 0 35 GiB 2026-04-04 01:13:13.401297 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-04 01:13:13.401303 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:13:13.401309 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:13:13.401316 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-04 01:13:13.401323 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-04 01:13:13.401329 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-04 01:13:13.401336 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-04 01:13:13.401342 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:13:13.401349 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:13:13.401355 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2026-04-04 01:13:13.401361 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:13:13.401367 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-04 01:13:13.446188 | orchestrator | ++ semver latest 5.0.0 2026-04-04 01:13:13.508106 | orchestrator | + [[ -1 -eq -1 ]] 2026-04-04 01:13:13.508194 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-04-04 01:13:13.508205 | orchestrator | + osism apply facts 2026-04-04 01:13:24.849959 | orchestrator | 2026-04-04 01:13:24 | INFO  | Prepare task for execution of facts. 2026-04-04 01:13:24.925409 | orchestrator | 2026-04-04 01:13:24 | INFO  | Task 78abea1f-a3d0-43e3-ba11-2891f3379142 (facts) was prepared for execution. 2026-04-04 01:13:24.925461 | orchestrator | 2026-04-04 01:13:24 | INFO  | It takes a moment until task 78abea1f-a3d0-43e3-ba11-2891f3379142 (facts) has been started and output is visible here. 
2026-04-04 01:13:37.516996 | orchestrator | 2026-04-04 01:13:37.517062 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-04 01:13:37.517071 | orchestrator | 2026-04-04 01:13:37.517077 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-04 01:13:37.517082 | orchestrator | Saturday 04 April 2026 01:13:28 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-04-04 01:13:37.517111 | orchestrator | ok: [testbed-manager] 2026-04-04 01:13:37.517118 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:13:37.517123 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:13:37.517128 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:13:37.517134 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:13:37.517139 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:13:37.517144 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:13:37.517149 | orchestrator | 2026-04-04 01:13:37.517154 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-04 01:13:37.517173 | orchestrator | Saturday 04 April 2026 01:13:29 +0000 (0:00:01.355) 0:00:01.694 ******** 2026-04-04 01:13:37.517179 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:13:37.517191 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:13:37.517197 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:13:37.517202 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:13:37.517207 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:13:37.517212 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:13:37.517217 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:13:37.517221 | orchestrator | 2026-04-04 01:13:37.517227 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-04 01:13:37.517232 | orchestrator | 2026-04-04 01:13:37.517237 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-04 01:13:37.517242 | orchestrator | Saturday 04 April 2026 01:13:30 +0000 (0:00:01.213) 0:00:02.908 ******** 2026-04-04 01:13:37.517247 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:13:37.517252 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:13:37.517257 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:13:37.517262 | orchestrator | ok: [testbed-manager] 2026-04-04 01:13:37.517266 | orchestrator | ok: [testbed-node-5] 2026-04-04 01:13:37.517271 | orchestrator | ok: [testbed-node-3] 2026-04-04 01:13:37.517276 | orchestrator | ok: [testbed-node-4] 2026-04-04 01:13:37.517281 | orchestrator | 2026-04-04 01:13:37.517286 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-04 01:13:37.517291 | orchestrator | 2026-04-04 01:13:37.517296 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-04 01:13:37.517301 | orchestrator | Saturday 04 April 2026 01:13:36 +0000 (0:00:05.614) 0:00:08.522 ******** 2026-04-04 01:13:37.517306 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:13:37.517311 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:13:37.517316 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:13:37.517321 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:13:37.517326 | orchestrator | skipping: [testbed-node-3] 2026-04-04 01:13:37.517331 | orchestrator | skipping: [testbed-node-4] 2026-04-04 01:13:37.517335 | orchestrator | skipping: [testbed-node-5] 2026-04-04 01:13:37.517341 | orchestrator | 2026-04-04 01:13:37.517346 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:13:37.517351 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517357 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-04 01:13:37.517362 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517367 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517372 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517377 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517382 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:13:37.517387 | orchestrator | 2026-04-04 01:13:37.517392 | orchestrator | 2026-04-04 01:13:37.517397 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:13:37.517402 | orchestrator | Saturday 04 April 2026 01:13:37 +0000 (0:00:00.697) 0:00:09.220 ******** 2026-04-04 01:13:37.517407 | orchestrator | =============================================================================== 2026-04-04 01:13:37.517412 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.61s 2026-04-04 01:13:37.517421 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2026-04-04 01:13:37.517426 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-04-04 01:13:37.517431 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2026-04-04 01:13:37.682684 | orchestrator | + osism validate ceph-mons 2026-04-04 01:14:07.174632 | orchestrator | 2026-04-04 01:14:07.174694 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-04 01:14:07.174703 | orchestrator | 2026-04-04 01:14:07.174711 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-04-04 01:14:07.174717 | orchestrator | Saturday 04 April 2026 01:13:52 +0000 (0:00:00.387) 0:00:00.387 ******** 2026-04-04 01:14:07.174724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.174731 | orchestrator | 2026-04-04 01:14:07.174738 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-04 01:14:07.174745 | orchestrator | Saturday 04 April 2026 01:13:53 +0000 (0:00:00.854) 0:00:01.242 ******** 2026-04-04 01:14:07.174751 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.174758 | orchestrator | 2026-04-04 01:14:07.174764 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-04 01:14:07.174771 | orchestrator | Saturday 04 April 2026 01:13:53 +0000 (0:00:00.531) 0:00:01.773 ******** 2026-04-04 01:14:07.174777 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.174784 | orchestrator | 2026-04-04 01:14:07.174790 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-04 01:14:07.174796 | orchestrator | Saturday 04 April 2026 01:13:53 +0000 (0:00:00.107) 0:00:01.880 ******** 2026-04-04 01:14:07.174802 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.174808 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:07.174815 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:07.174821 | orchestrator | 2026-04-04 01:14:07.174827 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-04 01:14:07.174834 | orchestrator | Saturday 04 April 2026 01:13:54 +0000 (0:00:00.279) 0:00:02.160 ******** 2026-04-04 01:14:07.174841 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:07.174847 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:07.174853 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.174860 | 
orchestrator | 2026-04-04 01:14:07.174866 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-04 01:14:07.174873 | orchestrator | Saturday 04 April 2026 01:13:55 +0000 (0:00:01.465) 0:00:03.625 ******** 2026-04-04 01:14:07.174880 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.174886 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:14:07.174892 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:14:07.174898 | orchestrator | 2026-04-04 01:14:07.174905 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-04 01:14:07.174911 | orchestrator | Saturday 04 April 2026 01:13:55 +0000 (0:00:00.220) 0:00:03.846 ******** 2026-04-04 01:14:07.174917 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.174923 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:07.174929 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:07.174935 | orchestrator | 2026-04-04 01:14:07.174941 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-04 01:14:07.174948 | orchestrator | Saturday 04 April 2026 01:13:56 +0000 (0:00:00.240) 0:00:04.087 ******** 2026-04-04 01:14:07.174953 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.174959 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:07.174966 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:07.174972 | orchestrator | 2026-04-04 01:14:07.174978 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-04 01:14:07.174984 | orchestrator | Saturday 04 April 2026 01:13:56 +0000 (0:00:00.266) 0:00:04.354 ******** 2026-04-04 01:14:07.174990 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175009 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:14:07.175015 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:14:07.175021 | orchestrator | 2026-04-04 
01:14:07.175027 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-04-04 01:14:07.175033 | orchestrator | Saturday 04 April 2026 01:13:56 +0000 (0:00:00.348) 0:00:04.702 ******** 2026-04-04 01:14:07.175039 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175044 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:07.175050 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:07.175056 | orchestrator | 2026-04-04 01:14:07.175072 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-04 01:14:07.175078 | orchestrator | Saturday 04 April 2026 01:13:56 +0000 (0:00:00.247) 0:00:04.949 ******** 2026-04-04 01:14:07.175084 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175090 | orchestrator | 2026-04-04 01:14:07.175096 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-04 01:14:07.175102 | orchestrator | Saturday 04 April 2026 01:13:57 +0000 (0:00:00.211) 0:00:05.161 ******** 2026-04-04 01:14:07.175108 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175114 | orchestrator | 2026-04-04 01:14:07.175120 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-04 01:14:07.175126 | orchestrator | Saturday 04 April 2026 01:13:57 +0000 (0:00:00.218) 0:00:05.380 ******** 2026-04-04 01:14:07.175132 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175138 | orchestrator | 2026-04-04 01:14:07.175144 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:07.175150 | orchestrator | Saturday 04 April 2026 01:13:57 +0000 (0:00:00.220) 0:00:05.601 ******** 2026-04-04 01:14:07.175156 | orchestrator | 2026-04-04 01:14:07.175162 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:07.175168 | orchestrator | 
Saturday 04 April 2026 01:13:57 +0000 (0:00:00.063) 0:00:05.665 ******** 2026-04-04 01:14:07.175174 | orchestrator | 2026-04-04 01:14:07.175180 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:07.175186 | orchestrator | Saturday 04 April 2026 01:13:57 +0000 (0:00:00.063) 0:00:05.728 ******** 2026-04-04 01:14:07.175192 | orchestrator | 2026-04-04 01:14:07.175198 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-04 01:14:07.175205 | orchestrator | Saturday 04 April 2026 01:13:57 +0000 (0:00:00.161) 0:00:05.889 ******** 2026-04-04 01:14:07.175211 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175218 | orchestrator | 2026-04-04 01:14:07.175224 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-04 01:14:07.175231 | orchestrator | Saturday 04 April 2026 01:13:58 +0000 (0:00:00.234) 0:00:06.124 ******** 2026-04-04 01:14:07.175237 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175243 | orchestrator | 2026-04-04 01:14:07.175259 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-04-04 01:14:07.175266 | orchestrator | Saturday 04 April 2026 01:13:58 +0000 (0:00:00.212) 0:00:06.337 ******** 2026-04-04 01:14:07.175272 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175278 | orchestrator | 2026-04-04 01:14:07.175284 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-04-04 01:14:07.175291 | orchestrator | Saturday 04 April 2026 01:13:58 +0000 (0:00:00.117) 0:00:06.455 ******** 2026-04-04 01:14:07.175297 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:14:07.175303 | orchestrator | 2026-04-04 01:14:07.175309 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-04-04 01:14:07.175316 | orchestrator | 
Saturday 04 April 2026 01:14:00 +0000 (0:00:02.015) 0:00:08.470 ******** 2026-04-04 01:14:07.175322 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175328 | orchestrator | 2026-04-04 01:14:07.175335 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-04-04 01:14:07.175341 | orchestrator | Saturday 04 April 2026 01:14:00 +0000 (0:00:00.302) 0:00:08.773 ******** 2026-04-04 01:14:07.175352 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175358 | orchestrator | 2026-04-04 01:14:07.175365 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-04-04 01:14:07.175371 | orchestrator | Saturday 04 April 2026 01:14:00 +0000 (0:00:00.115) 0:00:08.888 ******** 2026-04-04 01:14:07.175377 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175384 | orchestrator | 2026-04-04 01:14:07.175390 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-04-04 01:14:07.175397 | orchestrator | Saturday 04 April 2026 01:14:01 +0000 (0:00:00.295) 0:00:09.183 ******** 2026-04-04 01:14:07.175405 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175412 | orchestrator | 2026-04-04 01:14:07.175418 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-04-04 01:14:07.175424 | orchestrator | Saturday 04 April 2026 01:14:01 +0000 (0:00:00.267) 0:00:09.451 ******** 2026-04-04 01:14:07.175430 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175437 | orchestrator | 2026-04-04 01:14:07.175443 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-04-04 01:14:07.175449 | orchestrator | Saturday 04 April 2026 01:14:01 +0000 (0:00:00.106) 0:00:09.557 ******** 2026-04-04 01:14:07.175455 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175462 | orchestrator | 2026-04-04 01:14:07.175468 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-04-04 01:14:07.175475 | orchestrator | Saturday 04 April 2026 01:14:01 +0000 (0:00:00.116) 0:00:09.673 ******** 2026-04-04 01:14:07.175481 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175487 | orchestrator | 2026-04-04 01:14:07.175493 | orchestrator | TASK [Gather status data] ****************************************************** 2026-04-04 01:14:07.175500 | orchestrator | Saturday 04 April 2026 01:14:01 +0000 (0:00:00.253) 0:00:09.926 ******** 2026-04-04 01:14:07.175506 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:14:07.175512 | orchestrator | 2026-04-04 01:14:07.175519 | orchestrator | TASK [Set health test data] **************************************************** 2026-04-04 01:14:07.175525 | orchestrator | Saturday 04 April 2026 01:14:03 +0000 (0:00:01.312) 0:00:11.239 ******** 2026-04-04 01:14:07.175531 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175537 | orchestrator | 2026-04-04 01:14:07.175557 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-04-04 01:14:07.175563 | orchestrator | Saturday 04 April 2026 01:14:03 +0000 (0:00:00.316) 0:00:11.556 ******** 2026-04-04 01:14:07.175570 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175576 | orchestrator | 2026-04-04 01:14:07.175582 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-04-04 01:14:07.175588 | orchestrator | Saturday 04 April 2026 01:14:03 +0000 (0:00:00.138) 0:00:11.694 ******** 2026-04-04 01:14:07.175594 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:07.175600 | orchestrator | 2026-04-04 01:14:07.175606 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-04-04 01:14:07.175612 | orchestrator | Saturday 04 April 2026 01:14:03 +0000 (0:00:00.136) 0:00:11.831 ******** 2026-04-04 01:14:07.175618 | 
orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175623 | orchestrator | 2026-04-04 01:14:07.175629 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-04-04 01:14:07.175636 | orchestrator | Saturday 04 April 2026 01:14:03 +0000 (0:00:00.138) 0:00:11.970 ******** 2026-04-04 01:14:07.175642 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175648 | orchestrator | 2026-04-04 01:14:07.175655 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-04 01:14:07.175661 | orchestrator | Saturday 04 April 2026 01:14:04 +0000 (0:00:00.129) 0:00:12.099 ******** 2026-04-04 01:14:07.175667 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.175674 | orchestrator | 2026-04-04 01:14:07.175680 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-04 01:14:07.175686 | orchestrator | Saturday 04 April 2026 01:14:04 +0000 (0:00:00.240) 0:00:12.339 ******** 2026-04-04 01:14:07.175698 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:07.175703 | orchestrator | 2026-04-04 01:14:07.175712 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-04 01:14:07.175719 | orchestrator | Saturday 04 April 2026 01:14:04 +0000 (0:00:00.237) 0:00:12.576 ******** 2026-04-04 01:14:07.175724 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.175730 | orchestrator | 2026-04-04 01:14:07.175736 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-04 01:14:07.175742 | orchestrator | Saturday 04 April 2026 01:14:06 +0000 (0:00:01.791) 0:00:14.368 ******** 2026-04-04 01:14:07.175748 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.175754 | orchestrator | 2026-04-04 01:14:07.175760 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-04-04 01:14:07.175765 | orchestrator | Saturday 04 April 2026 01:14:06 +0000 (0:00:00.249) 0:00:14.618 ******** 2026-04-04 01:14:07.175772 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:07.175778 | orchestrator | 2026-04-04 01:14:07.175788 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:09.325663 | orchestrator | Saturday 04 April 2026 01:14:07 +0000 (0:00:00.598) 0:00:15.217 ******** 2026-04-04 01:14:09.326337 | orchestrator | 2026-04-04 01:14:09.326366 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:09.326379 | orchestrator | Saturday 04 April 2026 01:14:07 +0000 (0:00:00.068) 0:00:15.285 ******** 2026-04-04 01:14:09.326389 | orchestrator | 2026-04-04 01:14:09.326400 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:09.326411 | orchestrator | Saturday 04 April 2026 01:14:07 +0000 (0:00:00.066) 0:00:15.352 ******** 2026-04-04 01:14:09.326422 | orchestrator | 2026-04-04 01:14:09.326432 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-04 01:14:09.326442 | orchestrator | Saturday 04 April 2026 01:14:07 +0000 (0:00:00.069) 0:00:15.421 ******** 2026-04-04 01:14:09.326453 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:09.326463 | orchestrator | 2026-04-04 01:14:09.326475 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-04 01:14:09.326485 | orchestrator | Saturday 04 April 2026 01:14:08 +0000 (0:00:01.254) 0:00:16.676 ******** 2026-04-04 01:14:09.326495 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-04 01:14:09.326506 | orchestrator |  "msg": [ 
2026-04-04 01:14:09.326517 | orchestrator |  "Validator run completed.", 2026-04-04 01:14:09.326528 | orchestrator |  "You can find the report file here:", 2026-04-04 01:14:09.326539 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-04-04T01:13:53+00:00-report.json", 2026-04-04 01:14:09.326567 | orchestrator |  "on the following host:", 2026-04-04 01:14:09.326578 | orchestrator |  "testbed-manager" 2026-04-04 01:14:09.326590 | orchestrator |  ] 2026-04-04 01:14:09.326601 | orchestrator | } 2026-04-04 01:14:09.326611 | orchestrator | 2026-04-04 01:14:09.326620 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:14:09.326631 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-04-04 01:14:09.326641 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:14:09.326651 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:14:09.326661 | orchestrator | 2026-04-04 01:14:09.326671 | orchestrator | 2026-04-04 01:14:09.326680 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:14:09.326689 | orchestrator | Saturday 04 April 2026 01:14:09 +0000 (0:00:00.422) 0:00:17.098 ******** 2026-04-04 01:14:09.326718 | orchestrator | =============================================================================== 2026-04-04 01:14:09.326727 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.02s 2026-04-04 01:14:09.326737 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-04-04 01:14:09.326746 | orchestrator | Get container info ------------------------------------------------------ 1.47s 2026-04-04 01:14:09.326754 | orchestrator | Gather status data 
------------------------------------------------------ 1.31s 2026-04-04 01:14:09.326763 | orchestrator | Write report file ------------------------------------------------------- 1.25s 2026-04-04 01:14:09.326771 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-04-04 01:14:09.326780 | orchestrator | Aggregate test results step three --------------------------------------- 0.60s 2026-04-04 01:14:09.326788 | orchestrator | Create report output directory ------------------------------------------ 0.53s 2026-04-04 01:14:09.326797 | orchestrator | Print report file information ------------------------------------------- 0.42s 2026-04-04 01:14:09.326805 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.35s 2026-04-04 01:14:09.326814 | orchestrator | Set health test data ---------------------------------------------------- 0.32s 2026-04-04 01:14:09.326822 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2026-04-04 01:14:09.326831 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s 2026-04-04 01:14:09.326840 | orchestrator | Flush handlers ---------------------------------------------------------- 0.29s 2026-04-04 01:14:09.326849 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2026-04-04 01:14:09.326857 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.27s 2026-04-04 01:14:09.326866 | orchestrator | Prepare test data ------------------------------------------------------- 0.27s 2026-04-04 01:14:09.326875 | orchestrator | Prepare status test vars ------------------------------------------------ 0.25s 2026-04-04 01:14:09.326884 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2026-04-04 01:14:09.326893 | orchestrator | Set test result to passed if 
ceph-mon is running ------------------------ 0.25s 2026-04-04 01:14:09.493065 | orchestrator | + osism validate ceph-mgrs 2026-04-04 01:14:37.562420 | orchestrator | 2026-04-04 01:14:37.562473 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-04-04 01:14:37.562479 | orchestrator | 2026-04-04 01:14:37.562484 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-04 01:14:37.562488 | orchestrator | Saturday 04 April 2026 01:14:24 +0000 (0:00:00.398) 0:00:00.398 ******** 2026-04-04 01:14:37.562492 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.562496 | orchestrator | 2026-04-04 01:14:37.562500 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-04 01:14:37.562504 | orchestrator | Saturday 04 April 2026 01:14:25 +0000 (0:00:00.833) 0:00:01.232 ******** 2026-04-04 01:14:37.562508 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.562512 | orchestrator | 2026-04-04 01:14:37.562516 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-04 01:14:37.562520 | orchestrator | Saturday 04 April 2026 01:14:25 +0000 (0:00:00.613) 0:00:01.845 ******** 2026-04-04 01:14:37.562524 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562528 | orchestrator | 2026-04-04 01:14:37.562532 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-04 01:14:37.562544 | orchestrator | Saturday 04 April 2026 01:14:25 +0000 (0:00:00.114) 0:00:01.959 ******** 2026-04-04 01:14:37.562551 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562556 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:37.562592 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:37.562596 | orchestrator | 2026-04-04 01:14:37.562600 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-04-04 01:14:37.562604 | orchestrator | Saturday 04 April 2026 01:14:26 +0000 (0:00:00.248) 0:00:02.208 ******** 2026-04-04 01:14:37.562619 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:37.562623 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562627 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:37.562631 | orchestrator | 2026-04-04 01:14:37.562635 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-04 01:14:37.562638 | orchestrator | Saturday 04 April 2026 01:14:27 +0000 (0:00:01.471) 0:00:03.679 ******** 2026-04-04 01:14:37.562642 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562646 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:14:37.562650 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:14:37.562653 | orchestrator | 2026-04-04 01:14:37.562659 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-04 01:14:37.562663 | orchestrator | Saturday 04 April 2026 01:14:27 +0000 (0:00:00.248) 0:00:03.928 ******** 2026-04-04 01:14:37.562667 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562671 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:37.562674 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:37.562678 | orchestrator | 2026-04-04 01:14:37.562682 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-04 01:14:37.562692 | orchestrator | Saturday 04 April 2026 01:14:28 +0000 (0:00:00.276) 0:00:04.205 ******** 2026-04-04 01:14:37.562696 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562704 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:37.562708 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:37.562712 | orchestrator | 2026-04-04 01:14:37.562716 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-04-04 01:14:37.562719 | orchestrator | Saturday 04 April 2026 01:14:28 +0000 (0:00:00.278) 0:00:04.484 ******** 2026-04-04 01:14:37.562723 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562727 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:14:37.562731 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:14:37.562735 | orchestrator | 2026-04-04 01:14:37.562738 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-04-04 01:14:37.562742 | orchestrator | Saturday 04 April 2026 01:14:28 +0000 (0:00:00.358) 0:00:04.842 ******** 2026-04-04 01:14:37.562746 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562750 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:14:37.562753 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:14:37.562757 | orchestrator | 2026-04-04 01:14:37.562761 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-04 01:14:37.562765 | orchestrator | Saturday 04 April 2026 01:14:28 +0000 (0:00:00.261) 0:00:05.104 ******** 2026-04-04 01:14:37.562768 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562772 | orchestrator | 2026-04-04 01:14:37.562776 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-04 01:14:37.562780 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.244) 0:00:05.348 ******** 2026-04-04 01:14:37.562783 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562787 | orchestrator | 2026-04-04 01:14:37.562791 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-04 01:14:37.562795 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.215) 0:00:05.564 ******** 2026-04-04 01:14:37.562799 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562803 | orchestrator | 2026-04-04 01:14:37.562806 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-04-04 01:14:37.562810 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.211) 0:00:05.776 ******** 2026-04-04 01:14:37.562814 | orchestrator | 2026-04-04 01:14:37.562818 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:37.562821 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.087) 0:00:05.864 ******** 2026-04-04 01:14:37.562825 | orchestrator | 2026-04-04 01:14:37.562829 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:37.562833 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.077) 0:00:05.942 ******** 2026-04-04 01:14:37.562840 | orchestrator | 2026-04-04 01:14:37.562844 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-04 01:14:37.562848 | orchestrator | Saturday 04 April 2026 01:14:29 +0000 (0:00:00.199) 0:00:06.141 ******** 2026-04-04 01:14:37.562851 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562855 | orchestrator | 2026-04-04 01:14:37.562859 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-04-04 01:14:37.562863 | orchestrator | Saturday 04 April 2026 01:14:30 +0000 (0:00:00.254) 0:00:06.395 ******** 2026-04-04 01:14:37.562867 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562870 | orchestrator | 2026-04-04 01:14:37.562883 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-04-04 01:14:37.562887 | orchestrator | Saturday 04 April 2026 01:14:30 +0000 (0:00:00.231) 0:00:06.627 ******** 2026-04-04 01:14:37.562891 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562895 | orchestrator | 2026-04-04 01:14:37.562899 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-04-04 01:14:37.562902 | orchestrator | Saturday 04 April 2026 01:14:30 +0000 (0:00:00.134) 0:00:06.761 ******** 2026-04-04 01:14:37.562906 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:14:37.562910 | orchestrator | 2026-04-04 01:14:37.562914 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-04-04 01:14:37.562917 | orchestrator | Saturday 04 April 2026 01:14:32 +0000 (0:00:01.726) 0:00:08.488 ******** 2026-04-04 01:14:37.562921 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562925 | orchestrator | 2026-04-04 01:14:37.562929 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-04-04 01:14:37.562932 | orchestrator | Saturday 04 April 2026 01:14:32 +0000 (0:00:00.236) 0:00:08.724 ******** 2026-04-04 01:14:37.562936 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562940 | orchestrator | 2026-04-04 01:14:37.562944 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-04-04 01:14:37.562947 | orchestrator | Saturday 04 April 2026 01:14:32 +0000 (0:00:00.300) 0:00:09.025 ******** 2026-04-04 01:14:37.562951 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.562955 | orchestrator | 2026-04-04 01:14:37.562959 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-04-04 01:14:37.562962 | orchestrator | Saturday 04 April 2026 01:14:32 +0000 (0:00:00.133) 0:00:09.159 ******** 2026-04-04 01:14:37.562966 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:14:37.562970 | orchestrator | 2026-04-04 01:14:37.562974 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-04 01:14:37.562977 | orchestrator | Saturday 04 April 2026 01:14:33 +0000 (0:00:00.145) 0:00:09.304 ******** 2026-04-04 01:14:37.562981 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 
01:14:37.562985 | orchestrator | 2026-04-04 01:14:37.562989 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-04 01:14:37.562992 | orchestrator | Saturday 04 April 2026 01:14:33 +0000 (0:00:00.262) 0:00:09.567 ******** 2026-04-04 01:14:37.562998 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:14:37.563002 | orchestrator | 2026-04-04 01:14:37.563006 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-04 01:14:37.563010 | orchestrator | Saturday 04 April 2026 01:14:33 +0000 (0:00:00.236) 0:00:09.804 ******** 2026-04-04 01:14:37.563013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.563017 | orchestrator | 2026-04-04 01:14:37.563021 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-04 01:14:37.563025 | orchestrator | Saturday 04 April 2026 01:14:35 +0000 (0:00:01.484) 0:00:11.289 ******** 2026-04-04 01:14:37.563029 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.563032 | orchestrator | 2026-04-04 01:14:37.563036 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-04 01:14:37.563040 | orchestrator | Saturday 04 April 2026 01:14:35 +0000 (0:00:00.281) 0:00:11.570 ******** 2026-04-04 01:14:37.563047 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.563050 | orchestrator | 2026-04-04 01:14:37.563054 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:37.563058 | orchestrator | Saturday 04 April 2026 01:14:35 +0000 (0:00:00.264) 0:00:11.834 ******** 2026-04-04 01:14:37.563061 | orchestrator | 2026-04-04 01:14:37.563065 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:37.563069 | orchestrator 
| Saturday 04 April 2026 01:14:35 +0000 (0:00:00.087) 0:00:11.922 ******** 2026-04-04 01:14:37.563073 | orchestrator | 2026-04-04 01:14:37.563076 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-04 01:14:37.563080 | orchestrator | Saturday 04 April 2026 01:14:35 +0000 (0:00:00.072) 0:00:11.994 ******** 2026-04-04 01:14:37.563084 | orchestrator | 2026-04-04 01:14:37.563088 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-04 01:14:37.563091 | orchestrator | Saturday 04 April 2026 01:14:35 +0000 (0:00:00.071) 0:00:12.066 ******** 2026-04-04 01:14:37.563095 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-04 01:14:37.563099 | orchestrator | 2026-04-04 01:14:37.563103 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-04 01:14:37.563106 | orchestrator | Saturday 04 April 2026 01:14:37 +0000 (0:00:01.269) 0:00:13.335 ******** 2026-04-04 01:14:37.563110 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-04 01:14:37.563114 | orchestrator |  "msg": [ 2026-04-04 01:14:37.563118 | orchestrator |  "Validator run completed.", 2026-04-04 01:14:37.563122 | orchestrator |  "You can find the report file here:", 2026-04-04 01:14:37.563125 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-04-04T01:14:24+00:00-report.json", 2026-04-04 01:14:37.563130 | orchestrator |  "on the following host:", 2026-04-04 01:14:37.563134 | orchestrator |  "testbed-manager" 2026-04-04 01:14:37.563138 | orchestrator |  ] 2026-04-04 01:14:37.563142 | orchestrator | } 2026-04-04 01:14:37.563145 | orchestrator | 2026-04-04 01:14:37.563149 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:14:37.563154 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-04-04 01:14:37.563158 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:14:37.563165 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-04 01:14:37.856966 | orchestrator | 2026-04-04 01:14:37.857029 | orchestrator | 2026-04-04 01:14:37.857040 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:14:37.857050 | orchestrator | Saturday 04 April 2026 01:14:37 +0000 (0:00:00.388) 0:00:13.724 ******** 2026-04-04 01:14:37.857059 | orchestrator | =============================================================================== 2026-04-04 01:14:37.857067 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.73s 2026-04-04 01:14:37.857076 | orchestrator | Aggregate test results step one ----------------------------------------- 1.48s 2026-04-04 01:14:37.857084 | orchestrator | Get container info ------------------------------------------------------ 1.47s 2026-04-04 01:14:37.857092 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2026-04-04 01:14:37.857101 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-04-04 01:14:37.857109 | orchestrator | Create report output directory ------------------------------------------ 0.61s 2026-04-04 01:14:37.857117 | orchestrator | Print report file information ------------------------------------------- 0.39s 2026-04-04 01:14:37.857125 | orchestrator | Flush handlers ---------------------------------------------------------- 0.36s 2026-04-04 01:14:37.857150 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.36s 2026-04-04 01:14:37.857158 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s 2026-04-04 01:14:37.857167 | 
orchestrator | Aggregate test results step two ----------------------------------------- 0.28s
2026-04-04 01:14:37.857175 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s
2026-04-04 01:14:37.857183 | orchestrator | Set test result to passed if container is existing ---------------------- 0.28s
2026-04-04 01:14:37.857192 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2026-04-04 01:14:37.857200 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2026-04-04 01:14:37.857208 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.26s
2026-04-04 01:14:37.857218 | orchestrator | Print report file information ------------------------------------------- 0.25s
2026-04-04 01:14:37.857226 | orchestrator | Set test result to failed if container is missing ----------------------- 0.25s
2026-04-04 01:14:37.857235 | orchestrator | Prepare test data for container existence test -------------------------- 0.25s
2026-04-04 01:14:37.857243 | orchestrator | Aggregate test results step one ----------------------------------------- 0.24s
2026-04-04 01:14:38.031446 | orchestrator | + osism validate ceph-osds
2026-04-04 01:14:56.886229 | orchestrator |
2026-04-04 01:14:56.886318 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-04 01:14:56.886326 | orchestrator |
2026-04-04 01:14:56.886331 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-04 01:14:56.886336 | orchestrator | Saturday 04 April 2026 01:14:52 +0000 (0:00:00.511) 0:00:00.511 ********
2026-04-04 01:14:56.886341 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:14:56.886345 | orchestrator |
2026-04-04 01:14:56.886349 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-04 01:14:56.886353 | orchestrator | Saturday 04 April 2026 01:14:53 +0000 (0:00:00.984) 0:00:01.496 ********
2026-04-04 01:14:56.886357 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:14:56.886361 | orchestrator |
2026-04-04 01:14:56.886365 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-04 01:14:56.886369 | orchestrator | Saturday 04 April 2026 01:14:54 +0000 (0:00:00.269) 0:00:01.765 ********
2026-04-04 01:14:56.886372 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:14:56.886376 | orchestrator |
2026-04-04 01:14:56.886380 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-04 01:14:56.886384 | orchestrator | Saturday 04 April 2026 01:14:54 +0000 (0:00:00.115) 0:00:02.436 ********
2026-04-04 01:14:56.886388 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:14:56.886392 | orchestrator |
2026-04-04 01:14:56.886396 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-04 01:14:56.886400 | orchestrator | Saturday 04 April 2026 01:14:55 +0000 (0:00:00.126) 0:00:02.551 ********
2026-04-04 01:14:56.886404 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:14:56.886408 | orchestrator |
2026-04-04 01:14:56.886412 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-04 01:14:56.886415 | orchestrator | Saturday 04 April 2026 01:14:55 +0000 (0:00:00.126) 0:00:02.678 ********
2026-04-04 01:14:56.886419 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:14:56.886423 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:14:56.886427 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:14:56.886430 | orchestrator |
2026-04-04 01:14:56.886434 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-04 01:14:56.886438 | orchestrator | Saturday 04 April 2026 01:14:55 +0000 (0:00:00.417) 0:00:03.095 ********
2026-04-04 01:14:56.886442 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:14:56.886445 | orchestrator |
2026-04-04 01:14:56.886449 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-04 01:14:56.886475 | orchestrator | Saturday 04 April 2026 01:14:55 +0000 (0:00:00.140) 0:00:03.236 ********
2026-04-04 01:14:56.886482 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:14:56.886488 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:14:56.886494 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:14:56.886500 | orchestrator |
2026-04-04 01:14:56.886506 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-04 01:14:56.886512 | orchestrator | Saturday 04 April 2026 01:14:56 +0000 (0:00:00.315) 0:00:03.552 ********
2026-04-04 01:14:56.886518 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:14:56.886524 | orchestrator |
2026-04-04 01:14:56.886545 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:14:56.886552 | orchestrator | Saturday 04 April 2026 01:14:56 +0000 (0:00:00.336) 0:00:03.889 ********
2026-04-04 01:14:56.886559 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:14:56.886641 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:14:56.886651 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:14:56.886657 | orchestrator |
2026-04-04 01:14:56.886664 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-04 01:14:56.886670 | orchestrator | Saturday 04 April 2026 01:14:56 +0000 (0:00:00.269) 0:00:04.158 ********
2026-04-04 01:14:56.886678 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c36dec8b67784fd47b2d5b477ed5d918b8b00ed8d5a4060290764779ee8bb414', 'image':
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:56.886688 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a77f8ea68537d306e5271b7003255f2cb8773aad26714ba67c24f28a4e4087c1', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:56.886695 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b273cdb4da7ffc9d783173baab7d289017595ecbad842915457aa4d06f05cc4b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:56.886703 | orchestrator | skipping: [testbed-node-3] => (item={'id': '50f2208c4e970b2b061a77ec886f21702663660e1887c482cd4ed244c47513ed', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:56.886722 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eb8bf4df2cea29b0a0d6048c50b5d12bff83aba5dbcb221e82eb0754f0eecbbc', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-04-04 01:14:56.886744 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'baca150cdd14d85ea56334cdfb7ad23a70bc8560e9e9a33283ff3769210ed17a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:56.886751 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e36476afff82192ad0f1e4946bd654133bab6729726601bf74768fe721b305ff', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:56.886756 | orchestrator | skipping: [testbed-node-3] => (item={'id': '20271786a707c5d8e202dbd88cbe9e66cd9a9ed3cd747e11a37940d4c4c5d4ea', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-04-04 01:14:56.886764 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba731a3f6bdfc721647c2cdc847f3f4016b5f45a317ab79bb6d1d612f259b711', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:56.886768 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31515b0ddb2cdfb34145b18ac8fb7dda22249ebf90e748aa58447080b47d445b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:56.886779 | orchestrator | ok: [testbed-node-3] => (item={'id': '78131ba14fd653e574108f801f089160dd088a70d6e8bb1eac3c3a42f08fd31e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:56.886785 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ae7bfe017985a259f58f5299f43496256434299a0cd3a9f18c1b061edbcb7ad5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:56.886789 | orchestrator | skipping: [testbed-node-3] => (item={'id': '65f72e043c133af03853233dc00e77afb207f50369ab88cf3b8b8838fa09d9d7', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-04-04 01:14:56.886794 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0deca1364fd3d4b8251efa07dc7976e084a1aa71707a6934951540204009fd3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-04-04 01:14:56.886799 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e68c7ab07b46a6790637a65eb86e109ad60651a8f566fce2872a8e652689b9a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-04-04 01:14:56.886803 | orchestrator | skipping: [testbed-node-3] => (item={'id': '957c61371643ec6c07b91c032f63f959a8aea2e2f926de7d856ff1555362b21e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:56.886808 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8cf51d74ba404c1bbff0079e684d24c3a66c0e7712e80704c3964e96f59333b6', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:56.886812 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5bcc21cb0e8dde479eeb47a46c7ad775dfa6d47a6b777effe5c6e06415ef7679', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-04-04 01:14:56.886817 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fef0e903e9afbb95d2c45847a92548ee91cad551bf0d04a9d20daab7244fc15e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:56.886822 | orchestrator | skipping: [testbed-node-4] => (item={'id': '36c7de1069abd368e8904c10f6ea4722562b624ae95071db8e4c2176034b72bf', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:56.886830 | orchestrator | skipping: [testbed-node-4] => (item={'id': '92d5203e9bcb6a3d913834dd3a5d7bb05a45136c6f9a9ad7d2de484050ac891d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:56.886840 | orchestrator | skipping: [testbed-node-4] => (item={'id': '405b0abfc1b7370fe80387247f0bbbed22c195e12638db2612113a8e11e2cafc', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:57.012064 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c07f73a77b01bfc2bbf2e719acdacf9ad94bb1729424edf27c38bb78f8a8ca12', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-04-04 01:14:57.012160 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6bc3e07d6f1f888a9b5f48e799eabd1eb210e079dc763d36ee51d986a881801f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:57.012194 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7d029037432e2aec41be146b248c5cab36db4466384aff3f4dc7f09a178c433d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:57.012201 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2413e4e66071216ca161d7e8aa934c7d836938badec4202bd0d17d2b337ab2fa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-04-04 01:14:57.012207 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa96adabacb39ebee024bd7ed380ebd50c2eb7241fcd1dc252bfcf57ab8c0444', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:57.012214 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a99fca2381fef377a88531ec30f9c409b5e053cfa440cac1c0beb5cfa4761fc2', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:57.012224 | orchestrator | ok: [testbed-node-4] => (item={'id': '8ad4fe044c0a7d8aa5ffa1566164980fc2b55ee43e514adc4a3c1ad6a3a48639', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:57.012231 | orchestrator | ok: [testbed-node-4] => (item={'id': '797364c40a11ff797c74582f377df0915438c7ae06d2af97b9cac8ed490b4c6b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:57.012238 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2a7d1a74e6375ff51c20beb294dbf5dd7b4959443b0f8e5eb3b54f2b9c4093c0', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-04-04 01:14:57.012245 | orchestrator | skipping: [testbed-node-4] => (item={'id': '106bb1187d2cccba58ed58d2643c581179537d2fa7ac32b0ff56f871ac1953f1', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-04-04 01:14:57.012251 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9940979f7f90d1577db528575b80d631e925ed97add50c4ed842fb5936926898', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-04-04 01:14:57.012259 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8cbb4ba97979157c3bbd5b355b895114d73c17449339313fd8faaff5604714a5', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:57.012265 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9a9db59347b17c02ba05d9fa5aa4da82a024c1f5ee75d2223830825139957ae9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:57.012272 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6448b098e8fff0ffbebd15cb0308e4d8129cf141eb97d766b43fc3f73ef92653', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-04-04 01:14:57.012279 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b3105c6288f36f2b66d13ce5517f2233edc18d153f080a3467c1364403e0c7a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:57.012302 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8dffe1b84294a4ab8929dc15bb7728c782ac9e545b21a1298682816c6e157784', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2026-04-04 01:14:57.012319 | orchestrator | skipping: [testbed-node-5] => (item={'id': '352a3a3ed0d37495039d20f57ae484da1a88377cddc322a371ded7c1f16a941d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:57.012327 | orchestrator | skipping: [testbed-node-5] => (item={'id': '19baa0fe61ecbd63dc0e2f4fead5e06bc543daff6c2eebcc162cfe63cbf4d058', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2026-04-04 01:14:57.012333 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3332c3afc362fe29610ac5f45dc56fbffb31aacf382719370c45d20eafde4d6d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2026-04-04 01:14:57.012340 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1e3049a849f4e98f845a4a9c094ff14f72f4cf12f924632635255a5c0cd10e21', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:57.012347 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c663195cde9c32df594311a1697c5208308ed2e2b760d543b83d592dd82f1a4', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2026-04-04 01:14:57.012353 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b682360ba14ff5d95bc1a01648775d96534d6eab49bdc1a0e57b4a166c238e82', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2026-04-04 01:14:57.012358 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ebb994f33563d473c5028c2eaa69c7aa96a1023da91667b177f6cf1d3a827bd0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:57.012364 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd42ab815c4ea6b3f36792590ac42fee045fec549719a36d78cf8cbec612021ed', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})
2026-04-04 01:14:57.012379 | orchestrator | ok: [testbed-node-5] => (item={'id': '30a7e0a3f631b393a2cb7d414f6b7ca383da611575b25501a7636a8ad8ff4227', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:57.012385 | orchestrator | ok: [testbed-node-5] => (item={'id': '23c2b512815a7b9c06d257f77ae03225d1af7d7e87e3eb948ee6db7e4ca0d613', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-04 01:14:57.012391 | orchestrator | skipping: [testbed-node-5] => (item={'id': '85be22e13c1e0a2732ce47d45c1bb1065f9d7de71a1d0e43e98288c0122c2d6c', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 25 minutes'})
2026-04-04 01:14:57.012398 | orchestrator | skipping: [testbed-node-5] => (item={'id': '216bbc95ece8e32f34b1b581d1d769e3149b63445aceb404a5d219b134615c79', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 26 minutes (healthy)'})
2026-04-04 01:14:57.012403 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c95e18aabac975587e502f917122bb6c47ee84f142ce43acd513d5f29605b695', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2026-04-04 01:14:57.012413 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c031a002d93f460b7d7e4ea2398e53b795a4f8e489209b1c66c827bf7a713555', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:57.012424 | orchestrator | skipping: [testbed-node-5] => (item={'id': '36aea5c1849e7fd5116ddc82fc89ebf0b3bd0aeb9c81455474671f2d67945cb7', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 27 minutes'})
2026-04-04 01:14:57.012435 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5384d51fbff0d795601ced6ed7e6cdc6d7172cd89ebfc401225a31586ed70db0', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 28 minutes'})
2026-04-04 01:15:09.788252 | orchestrator |
2026-04-04 01:15:09.788356 | orchestrator | TASK [Get count of ceph-osd containers on
host] ********************************
2026-04-04 01:15:09.788366 | orchestrator | Saturday 04 April 2026 01:14:57 +0000 (0:00:00.606) 0:00:04.765 ********
2026-04-04 01:15:09.788371 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.788376 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.788380 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.788384 | orchestrator |
2026-04-04 01:15:09.788389 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-04-04 01:15:09.788393 | orchestrator | Saturday 04 April 2026 01:14:57 +0000 (0:00:00.274) 0:00:05.040 ********
2026-04-04 01:15:09.788398 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788402 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.788406 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.788410 | orchestrator |
2026-04-04 01:15:09.788414 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-04-04 01:15:09.788419 | orchestrator | Saturday 04 April 2026 01:14:57 +0000 (0:00:00.295) 0:00:05.335 ********
2026-04-04 01:15:09.788423 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.788427 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.788431 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.788435 | orchestrator |
2026-04-04 01:15:09.788439 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:15:09.788443 | orchestrator | Saturday 04 April 2026 01:14:58 +0000 (0:00:00.324) 0:00:05.660 ********
2026-04-04 01:15:09.788447 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.788451 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.788454 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.788458 | orchestrator |
2026-04-04 01:15:09.788462 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-04-04 01:15:09.788467 | orchestrator | Saturday 04 April 2026 01:14:58 +0000 (0:00:00.417) 0:00:06.078 ********
2026-04-04 01:15:09.788471 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-04-04 01:15:09.788476 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-04-04 01:15:09.788480 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788484 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-04-04 01:15:09.788488 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-04-04 01:15:09.788491 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.788495 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-04-04 01:15:09.788499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-04-04 01:15:09.788503 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.788507 | orchestrator |
2026-04-04 01:15:09.788510 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-04-04 01:15:09.788514 | orchestrator | Saturday 04 April 2026 01:14:58 +0000 (0:00:00.328) 0:00:06.406 ********
2026-04-04 01:15:09.788518 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.788522 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.788543 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.788547 | orchestrator |
2026-04-04 01:15:09.788551 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-04 01:15:09.788555 | orchestrator | Saturday 04 April 2026 01:14:59 +0000 (0:00:00.319) 0:00:06.725 ********
2026-04-04 01:15:09.788558 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788562 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.788680 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.788689 | orchestrator |
2026-04-04 01:15:09.788695 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-04-04 01:15:09.788701 | orchestrator | Saturday 04 April 2026 01:14:59 +0000 (0:00:00.267) 0:00:06.993 ********
2026-04-04 01:15:09.788707 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788713 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.788718 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.788724 | orchestrator |
2026-04-04 01:15:09.788730 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-04-04 01:15:09.788736 | orchestrator | Saturday 04 April 2026 01:14:59 +0000 (0:00:00.447) 0:00:07.440 ********
2026-04-04 01:15:09.788741 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.788748 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.788753 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.788759 | orchestrator |
2026-04-04 01:15:09.788765 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:15:09.788771 | orchestrator | Saturday 04 April 2026 01:15:00 +0000 (0:00:00.296) 0:00:07.736 ********
2026-04-04 01:15:09.788777 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788782 | orchestrator |
2026-04-04 01:15:09.788789 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:15:09.788796 | orchestrator | Saturday 04 April 2026 01:15:00 +0000 (0:00:00.239) 0:00:07.976 ********
2026-04-04 01:15:09.788819 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788826 | orchestrator |
2026-04-04 01:15:09.788832 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:15:09.788838 | orchestrator | Saturday 04 April 2026 01:15:00 +0000 (0:00:00.251) 0:00:08.227 ********
2026-04-04 01:15:09.788845 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788851 | orchestrator |
2026-04-04 01:15:09.788857 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:09.788864 | orchestrator | Saturday 04 April 2026 01:15:00 +0000 (0:00:00.252) 0:00:08.480 ********
2026-04-04 01:15:09.788869 | orchestrator |
2026-04-04 01:15:09.788875 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:09.788881 | orchestrator | Saturday 04 April 2026 01:15:01 +0000 (0:00:00.066) 0:00:08.546 ********
2026-04-04 01:15:09.788887 | orchestrator |
2026-04-04 01:15:09.788893 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:09.788919 | orchestrator | Saturday 04 April 2026 01:15:01 +0000 (0:00:00.064) 0:00:08.611 ********
2026-04-04 01:15:09.788927 | orchestrator |
2026-04-04 01:15:09.788933 | orchestrator | TASK [Print report file information] *******************************************
2026-04-04 01:15:09.788938 | orchestrator | Saturday 04 April 2026 01:15:01 +0000 (0:00:00.066) 0:00:08.678 ********
2026-04-04 01:15:09.788945 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788951 | orchestrator |
2026-04-04 01:15:09.788957 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-04-04 01:15:09.788963 | orchestrator | Saturday 04 April 2026 01:15:01 +0000 (0:00:00.580) 0:00:09.258 ********
2026-04-04 01:15:09.788970 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.788976 | orchestrator |
2026-04-04 01:15:09.788983 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:15:09.788989 | orchestrator | Saturday 04 April 2026 01:15:01 +0000 (0:00:00.240) 0:00:09.498 ********
2026-04-04 01:15:09.788996 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789003 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.789019 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.789026 | orchestrator |
2026-04-04 01:15:09.789031 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-04-04 01:15:09.789036 | orchestrator | Saturday 04 April 2026 01:15:02 +0000 (0:00:00.280) 0:00:09.779 ********
2026-04-04 01:15:09.789040 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789045 | orchestrator |
2026-04-04 01:15:09.789049 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-04-04 01:15:09.789053 | orchestrator | Saturday 04 April 2026 01:15:02 +0000 (0:00:00.226) 0:00:10.006 ********
2026-04-04 01:15:09.789058 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-04 01:15:09.789062 | orchestrator |
2026-04-04 01:15:09.789067 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-04-04 01:15:09.789071 | orchestrator | Saturday 04 April 2026 01:15:04 +0000 (0:00:02.179) 0:00:12.186 ********
2026-04-04 01:15:09.789076 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789080 | orchestrator |
2026-04-04 01:15:09.789085 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-04-04 01:15:09.789089 | orchestrator | Saturday 04 April 2026 01:15:04 +0000 (0:00:00.123) 0:00:12.309 ********
2026-04-04 01:15:09.789094 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789098 | orchestrator |
2026-04-04 01:15:09.789103 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-04-04 01:15:09.789107 | orchestrator | Saturday 04 April 2026 01:15:05 +0000 (0:00:00.310) 0:00:12.619 ********
2026-04-04 01:15:09.789112 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.789117 | orchestrator |
2026-04-04 01:15:09.789121 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-04-04 01:15:09.789125 | orchestrator | Saturday 04 April 2026 01:15:05 +0000 (0:00:00.097) 0:00:12.717 ********
2026-04-04 01:15:09.789128 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789132 | orchestrator |
2026-04-04 01:15:09.789136 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:15:09.789140 | orchestrator | Saturday 04 April 2026 01:15:05 +0000 (0:00:00.140) 0:00:12.858 ********
2026-04-04 01:15:09.789143 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789147 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.789151 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.789155 | orchestrator |
2026-04-04 01:15:09.789159 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-04-04 01:15:09.789162 | orchestrator | Saturday 04 April 2026 01:15:05 +0000 (0:00:00.427) 0:00:13.285 ********
2026-04-04 01:15:09.789166 | orchestrator | changed: [testbed-node-3]
2026-04-04 01:15:09.789170 | orchestrator | changed: [testbed-node-4]
2026-04-04 01:15:09.789174 | orchestrator | changed: [testbed-node-5]
2026-04-04 01:15:09.789177 | orchestrator |
2026-04-04 01:15:09.789181 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-04-04 01:15:09.789185 | orchestrator | Saturday 04 April 2026 01:15:07 +0000 (0:00:01.775) 0:00:15.061 ********
2026-04-04 01:15:09.789189 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789192 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.789196 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.789200 | orchestrator |
2026-04-04 01:15:09.789203 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-04 01:15:09.789207 | orchestrator | Saturday 04 April 2026 01:15:07 +0000 (0:00:00.296) 0:00:15.357 ********
2026-04-04 01:15:09.789211 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789215 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.789218 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.789222 | orchestrator |
2026-04-04 01:15:09.789226 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-04 01:15:09.789230 | orchestrator | Saturday 04 April 2026 01:15:08 +0000 (0:00:00.474) 0:00:15.832 ********
2026-04-04 01:15:09.789233 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.789237 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.789245 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.789249 | orchestrator |
2026-04-04 01:15:09.789253 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-04 01:15:09.789257 | orchestrator | Saturday 04 April 2026 01:15:08 +0000 (0:00:00.441) 0:00:16.274 ********
2026-04-04 01:15:09.789260 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:09.789270 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:09.789274 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:09.789277 | orchestrator |
2026-04-04 01:15:09.789281 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-04 01:15:09.789285 | orchestrator | Saturday 04 April 2026 01:15:09 +0000 (0:00:00.330) 0:00:16.604 ********
2026-04-04 01:15:09.789289 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.789292 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.789296 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.789300 | orchestrator |
2026-04-04 01:15:09.789304 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-04 01:15:09.789307 | orchestrator | Saturday 04 April 2026 01:15:09 +0000 (0:00:00.259) 0:00:16.864 ********
2026-04-04 01:15:09.789311 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:09.789315 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:09.789319 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:09.789322 | orchestrator |
2026-04-04 01:15:09.789330 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-04 01:15:16.676992 | orchestrator | Saturday 04 April 2026 01:15:09 +0000 (0:00:00.434) 0:00:17.299 ********
2026-04-04 01:15:16.677036 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:16.677041 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:16.677044 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:16.677048 | orchestrator |
2026-04-04 01:15:16.677052 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node children] *************
2026-04-04 01:15:16.677055 | orchestrator | Saturday 04 April 2026 01:15:10 +0000 (0:00:00.535) 0:00:17.834 ********
2026-04-04 01:15:16.677058 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:16.677061 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:16.677064 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:16.677067 | orchestrator |
2026-04-04 01:15:16.677071 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-04 01:15:16.677074 | orchestrator | Saturday 04 April 2026 01:15:10 +0000 (0:00:00.499) 0:00:18.333 ********
2026-04-04 01:15:16.677077 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:16.677080 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:16.677083 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:16.677086 | orchestrator |
2026-04-04 01:15:16.677089 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-04 01:15:16.677092 | orchestrator | Saturday 04 April 2026 01:15:11 +0000 (0:00:00.283) 0:00:18.617 ********
2026-04-04 01:15:16.677096 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:16.677099 | orchestrator | skipping: [testbed-node-4]
2026-04-04 01:15:16.677102 | orchestrator | skipping: [testbed-node-5]
2026-04-04 01:15:16.677105 | orchestrator |
2026-04-04 01:15:16.677108 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-04 01:15:16.677111 | orchestrator | Saturday 04 April 2026 01:15:11 +0000 (0:00:00.427) 0:00:19.044 ********
2026-04-04 01:15:16.677115 | orchestrator | ok: [testbed-node-3]
2026-04-04 01:15:16.677118 | orchestrator | ok: [testbed-node-4]
2026-04-04 01:15:16.677121 | orchestrator | ok: [testbed-node-5]
2026-04-04 01:15:16.677124 | orchestrator |
2026-04-04 01:15:16.677127 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-04 01:15:16.677130 | orchestrator | Saturday 04 April 2026 01:15:11 +0000 (0:00:00.298) 0:00:19.343 ********
2026-04-04 01:15:16.677133 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:15:16.677137 | orchestrator |
2026-04-04 01:15:16.677140 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-04 01:15:16.677143 | orchestrator | Saturday 04 April 2026 01:15:12 +0000 (0:00:00.240) 0:00:19.584 ********
2026-04-04 01:15:16.677155 | orchestrator | skipping: [testbed-node-3]
2026-04-04 01:15:16.677158 | orchestrator |
2026-04-04 01:15:16.677161 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-04 01:15:16.677164 | orchestrator | Saturday 04 April 2026 01:15:12 +0000 (0:00:00.233) 0:00:19.817 ********
2026-04-04 01:15:16.677167 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:15:16.677170 | orchestrator |
2026-04-04 01:15:16.677173 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-04 01:15:16.677176 | orchestrator | Saturday 04 April 2026 01:15:13 +0000 (0:00:01.632) 0:00:21.450 ********
2026-04-04 01:15:16.677179 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:15:16.677182 | orchestrator |
2026-04-04 01:15:16.677185 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-04 01:15:16.677188 | orchestrator | Saturday 04 April 2026 01:15:14 +0000 (0:00:00.240) 0:00:21.690 ********
2026-04-04 01:15:16.677191 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:15:16.677194 | orchestrator |
2026-04-04 01:15:16.677197 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:16.677200 | orchestrator | Saturday 04 April 2026 01:15:14 +0000 (0:00:00.283) 0:00:21.973 ********
2026-04-04 01:15:16.677204 | orchestrator |
2026-04-04 01:15:16.677207 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:16.677210 | orchestrator | Saturday 04 April 2026 01:15:14 +0000 (0:00:00.068) 0:00:22.042 ********
2026-04-04 01:15:16.677213 | orchestrator |
2026-04-04 01:15:16.677216 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-04 01:15:16.677219 | orchestrator | Saturday 04 April 2026 01:15:14 +0000 (0:00:00.201) 0:00:22.244 ********
2026-04-04 01:15:16.677222 | orchestrator |
2026-04-04 01:15:16.677225 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-04 01:15:16.677228 | orchestrator | Saturday 04 April 2026 01:15:14 +0000 (0:00:00.069) 0:00:22.313 ********
2026-04-04 01:15:16.677232 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-04 01:15:16.677235 | orchestrator |
2026-04-04 01:15:16.677238 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-04 01:15:16.677241 | orchestrator | Saturday 04 April 2026 01:15:16 +0000 (0:00:01.219) 0:00:23.533 ******** 2026-04-04 01:15:16.677244 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-04 01:15:16.677247 | orchestrator |  "msg": [ 2026-04-04 01:15:16.677250 | orchestrator |  "Validator run completed.", 2026-04-04 01:15:16.677253 | orchestrator |  "You can find the report file here:", 2026-04-04 01:15:16.677256 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-04T01:14:53+00:00-report.json", 2026-04-04 01:15:16.677260 | orchestrator |  "on the following host:", 2026-04-04 01:15:16.677263 | orchestrator |  "testbed-manager" 2026-04-04 01:15:16.677266 | orchestrator |  ] 2026-04-04 01:15:16.677269 | orchestrator | } 2026-04-04 01:15:16.677272 | orchestrator | 2026-04-04 01:15:16.677275 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:15:16.677279 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-04 01:15:16.677283 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-04 01:15:16.677293 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-04 01:15:16.677296 | orchestrator | 2026-04-04 01:15:16.677299 | orchestrator | 2026-04-04 01:15:16.677303 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:15:16.677328 | orchestrator | Saturday 04 April 2026 01:15:16 +0000 (0:00:00.396) 0:00:23.930 ******** 2026-04-04 01:15:16.677332 | orchestrator | =============================================================================== 2026-04-04 01:15:16.677335 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 2.18s 2026-04-04 01:15:16.677338 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 1.78s 2026-04-04 01:15:16.677341 | orchestrator | Aggregate test results step one ----------------------------------------- 1.63s 2026-04-04 01:15:16.677344 | orchestrator | Write report file ------------------------------------------------------- 1.22s 2026-04-04 01:15:16.677347 | orchestrator | Get timestamp for report file ------------------------------------------- 0.98s 2026-04-04 01:15:16.677350 | orchestrator | Create report output directory ------------------------------------------ 0.67s 2026-04-04 01:15:16.677353 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.61s 2026-04-04 01:15:16.677356 | orchestrator | Print report file information ------------------------------------------- 0.58s 2026-04-04 01:15:16.677359 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s 2026-04-04 01:15:16.677363 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.50s 2026-04-04 01:15:16.677366 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2026-04-04 01:15:16.677369 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.45s 2026-04-04 01:15:16.677372 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.44s 2026-04-04 01:15:16.677375 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.43s 2026-04-04 01:15:16.677379 | orchestrator | Prepare test data ------------------------------------------------------- 0.43s 2026-04-04 01:15:16.677385 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.43s 2026-04-04 01:15:16.677390 | orchestrator | Prepare test data 
------------------------------------------------------- 0.42s 2026-04-04 01:15:16.677394 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.42s 2026-04-04 01:15:16.677399 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-04-04 01:15:16.677404 | orchestrator | Flush handlers ---------------------------------------------------------- 0.34s 2026-04-04 01:15:16.865080 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-04 01:15:16.874316 | orchestrator | + set -e 2026-04-04 01:15:16.875324 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 01:15:16.875379 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 01:15:16.875393 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 01:15:16.875401 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 01:15:16.875410 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 01:15:16.875419 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 01:15:16.875428 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 01:15:16.875436 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:15:16.875444 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:15:16.875453 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 01:15:16.875461 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 01:15:16.875468 | orchestrator | ++ export ARA=false 2026-04-04 01:15:16.875476 | orchestrator | ++ ARA=false 2026-04-04 01:15:16.875485 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 01:15:16.875496 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 01:15:16.875503 | orchestrator | ++ export TEMPEST=true 2026-04-04 01:15:16.875510 | orchestrator | ++ TEMPEST=true 2026-04-04 01:15:16.875518 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 01:15:16.875526 | orchestrator | ++ IS_ZUUL=true 2026-04-04 01:15:16.875535 | orchestrator | ++ export 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:15:16.875543 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:15:16.875552 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 01:15:16.875560 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 01:15:16.875603 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 01:15:16.875612 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 01:15:16.875621 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 01:15:16.875629 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 01:15:16.875638 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 01:15:16.875665 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 01:15:16.875673 | orchestrator | + source /etc/os-release 2026-04-04 01:15:16.875682 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-04 01:15:16.875690 | orchestrator | ++ NAME=Ubuntu 2026-04-04 01:15:16.875698 | orchestrator | ++ VERSION_ID=24.04 2026-04-04 01:15:16.875707 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-04 01:15:16.875715 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-04 01:15:16.875723 | orchestrator | ++ ID=ubuntu 2026-04-04 01:15:16.875731 | orchestrator | ++ ID_LIKE=debian 2026-04-04 01:15:16.875740 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-04 01:15:16.875748 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-04 01:15:16.875757 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-04 01:15:16.875765 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-04 01:15:16.875774 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-04 01:15:16.875782 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-04 01:15:16.875791 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-04 01:15:16.875809 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-04 
01:15:16.875822 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-04 01:15:16.907175 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-04 01:15:39.870847 | orchestrator | 2026-04-04 01:15:39.870918 | orchestrator | # Status of Elasticsearch 2026-04-04 01:15:39.870925 | orchestrator | 2026-04-04 01:15:39.870930 | orchestrator | + pushd /opt/configuration/contrib 2026-04-04 01:15:39.870935 | orchestrator | + echo 2026-04-04 01:15:39.870940 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-04 01:15:39.870943 | orchestrator | + echo 2026-04-04 01:15:39.870948 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-04 01:15:40.019790 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-04 01:15:40.019878 | orchestrator | 2026-04-04 01:15:40.019890 | orchestrator | # Status of MariaDB 2026-04-04 01:15:40.019899 | orchestrator | 2026-04-04 01:15:40.019906 | orchestrator | + echo 2026-04-04 01:15:40.019913 | orchestrator | + echo '# Status of MariaDB' 2026-04-04 01:15:40.019919 | orchestrator | + echo 2026-04-04 01:15:40.020911 | orchestrator | ++ semver latest 10.0.0-0 2026-04-04 01:15:40.052512 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 01:15:40.052623 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 01:15:40.052634 | orchestrator | + osism status database 2026-04-04 01:15:41.657140 | orchestrator | 2026-04-04 01:15:41 | ERROR  | Unable to get ansible vault password 2026-04-04 01:15:41.657259 | orchestrator | 2026-04-04 01:15:41 | 
ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:15:41.657272 | orchestrator | 2026-04-04 01:15:41 | ERROR  | Dropping encrypted entries 2026-04-04 01:15:41.689940 | orchestrator | 2026-04-04 01:15:41 | INFO  | Connecting to MariaDB at 192.168.16.9 as root_shard_0... 2026-04-04 01:15:41.705815 | orchestrator | 2026-04-04 01:15:41 | INFO  | Cluster Status: Primary 2026-04-04 01:15:41.705960 | orchestrator | 2026-04-04 01:15:41 | INFO  | Connected: ON 2026-04-04 01:15:41.705970 | orchestrator | 2026-04-04 01:15:41 | INFO  | Ready: ON 2026-04-04 01:15:41.705975 | orchestrator | 2026-04-04 01:15:41 | INFO  | Cluster Size: 3 2026-04-04 01:15:41.705980 | orchestrator | 2026-04-04 01:15:41 | INFO  | Local State: Synced 2026-04-04 01:15:41.705985 | orchestrator | 2026-04-04 01:15:41 | INFO  | Cluster State UUID: a33f1a47-2fc0-11f1-ad2c-5bccac489604 2026-04-04 01:15:41.705991 | orchestrator | 2026-04-04 01:15:41 | INFO  | Cluster Members: 192.168.16.11:3306,192.168.16.12:3306,192.168.16.10:3306 2026-04-04 01:15:41.705997 | orchestrator | 2026-04-04 01:15:41 | INFO  | Galera Version: 26.4.25(r7387a566) 2026-04-04 01:15:41.706063 | orchestrator | 2026-04-04 01:15:41 | INFO  | Local Node UUID: d6ea3fa0-2fc0-11f1-9502-57e9058272e5 2026-04-04 01:15:41.706079 | orchestrator | 2026-04-04 01:15:41 | INFO  | Flow Control Paused: 0.00% 2026-04-04 01:15:41.706084 | orchestrator | 2026-04-04 01:15:41 | INFO  | Recv Queue Avg: 0 2026-04-04 01:15:41.706088 | orchestrator | 2026-04-04 01:15:41 | INFO  | Send Queue Avg: 0.000913242 2026-04-04 01:15:41.706092 | orchestrator | 2026-04-04 01:15:41 | INFO  | Transactions: 4327 local commits, 6512 replicated, 82 received 2026-04-04 01:15:41.706096 | orchestrator | 2026-04-04 01:15:41 | INFO  | Conflicts: 0 cert failures, 0 bf aborts 2026-04-04 01:15:41.706099 | orchestrator | 2026-04-04 01:15:41 | INFO  | MariaDB Uptime: 21 minutes, 10 seconds 2026-04-04 01:15:41.706103 
| orchestrator | 2026-04-04 01:15:41 | INFO  | Threads: 133 connected, 1 running 2026-04-04 01:15:41.706108 | orchestrator | 2026-04-04 01:15:41 | INFO  | Queries: 202928 total, 0 slow 2026-04-04 01:15:41.706112 | orchestrator | 2026-04-04 01:15:41 | INFO  | Aborted Connects: 145 2026-04-04 01:15:41.706267 | orchestrator | 2026-04-04 01:15:41 | INFO  | MariaDB Galera Cluster validation PASSED 2026-04-04 01:15:41.916740 | orchestrator | 2026-04-04 01:15:41.916842 | orchestrator | # Status of Prometheus 2026-04-04 01:15:41.916852 | orchestrator | 2026-04-04 01:15:41.916861 | orchestrator | + echo 2026-04-04 01:15:41.916871 | orchestrator | + echo '# Status of Prometheus' 2026-04-04 01:15:41.916878 | orchestrator | + echo 2026-04-04 01:15:41.916885 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-04 01:15:41.978953 | orchestrator | Unauthorized 2026-04-04 01:15:41.981824 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-04 01:15:42.041374 | orchestrator | Unauthorized 2026-04-04 01:15:42.044821 | orchestrator | 2026-04-04 01:15:42.044900 | orchestrator | # Status of RabbitMQ 2026-04-04 01:15:42.044910 | orchestrator | 2026-04-04 01:15:42.044919 | orchestrator | + echo 2026-04-04 01:15:42.044926 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-04 01:15:42.044932 | orchestrator | + echo 2026-04-04 01:15:42.046298 | orchestrator | ++ semver latest 10.0.0-0 2026-04-04 01:15:42.101426 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-04 01:15:42.101493 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 01:15:42.101499 | orchestrator | + osism status messaging 2026-04-04 01:15:49.270103 | orchestrator | 2026-04-04 01:15:49 | ERROR  | Unable to get ansible vault password 2026-04-04 01:15:49.270170 | orchestrator | 2026-04-04 01:15:49 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:15:49.270180 | orchestrator | 2026-04-04 
01:15:49 | ERROR  | Dropping encrypted entries 2026-04-04 01:15:49.304858 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Connecting to RabbitMQ Management API at 192.168.16.10:15672 as openstack... 2026-04-04 01:15:49.361299 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] RabbitMQ Version: 3.13.7 2026-04-04 01:15:49.361346 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Erlang Version: 26.2.5.15 2026-04-04 01:15:49.361428 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Cluster Name: rabbit@testbed-node-0 2026-04-04 01:15:49.361509 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Cluster Size: 3 2026-04-04 01:15:49.362037 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.362501 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.362543 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Partitions: None (healthy) 2026-04-04 01:15:49.363039 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Connections: 209, Channels: 208, Queues: 173 2026-04-04 01:15:49.363095 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Messages: 231 total, 231 ready, 0 unacked 2026-04-04 01:15:49.363331 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Message Rates: 6.0/s publish, 5.6/s deliver 2026-04-04 01:15:49.363734 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Disk Free: 58.0 GB (limit: 0.0 GB) 2026-04-04 01:15:49.363751 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] Memory Used: 0.18 GB (limit: 12.54 GB) 2026-04-04 01:15:49.365150 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-0] File Descriptors: 124/1024 2026-04-04 01:15:49.365183 | orchestrator | 2026-04-04 01:15:49 | INFO  | 
[testbed-node-0] Sockets: 76/832 2026-04-04 01:15:49.365190 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Connecting to RabbitMQ Management API at 192.168.16.11:15672 as openstack... 2026-04-04 01:15:49.421383 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] RabbitMQ Version: 3.13.7 2026-04-04 01:15:49.421526 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Erlang Version: 26.2.5.15 2026-04-04 01:15:49.421539 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Cluster Name: rabbit@testbed-node-1 2026-04-04 01:15:49.421546 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Cluster Size: 3 2026-04-04 01:15:49.421553 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.421581 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.421589 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Partitions: None (healthy) 2026-04-04 01:15:49.422361 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Connections: 209, Channels: 208, Queues: 173 2026-04-04 01:15:49.422407 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Messages: 231 total, 231 ready, 0 unacked 2026-04-04 01:15:49.422464 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Message Rates: 6.0/s publish, 5.6/s deliver 2026-04-04 01:15:49.422864 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-04 01:15:49.423159 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-04 01:15:49.423168 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] File Descriptors: 113/1024 2026-04-04 01:15:49.423172 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-1] 
Sockets: 67/832 2026-04-04 01:15:49.423398 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Connecting to RabbitMQ Management API at 192.168.16.12:15672 as openstack... 2026-04-04 01:15:49.480944 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] RabbitMQ Version: 3.13.7 2026-04-04 01:15:49.481048 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Erlang Version: 26.2.5.15 2026-04-04 01:15:49.481057 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Cluster Name: rabbit@testbed-node-2 2026-04-04 01:15:49.481070 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Cluster Size: 3 2026-04-04 01:15:49.481239 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.481330 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Running Nodes: rabbit@testbed-node-0, rabbit@testbed-node-1, rabbit@testbed-node-2 2026-04-04 01:15:49.482185 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Partitions: None (healthy) 2026-04-04 01:15:49.482232 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Connections: 209, Channels: 208, Queues: 173 2026-04-04 01:15:49.483049 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Messages: 231 total, 231 ready, 0 unacked 2026-04-04 01:15:49.483084 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Message Rates: 6.0/s publish, 5.6/s deliver 2026-04-04 01:15:49.483092 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Disk Free: 58.3 GB (limit: 0.0 GB) 2026-04-04 01:15:49.483097 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Memory Used: 0.17 GB (limit: 12.54 GB) 2026-04-04 01:15:49.483101 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] File Descriptors: 112/1024 2026-04-04 01:15:49.483105 | orchestrator | 2026-04-04 01:15:49 | INFO  | [testbed-node-2] Sockets: 66/832 
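Aside (not part of the job output): each node above reports resource counters in a `used/limit` form, e.g. `Sockets: 66/832` on testbed-node-2. A minimal sketch of splitting such a counter with plain parameter expansion, using that sample value, to flag exhaustion before RabbitMQ hits its limit:

```shell
# Sketch only; the sample value is copied from the testbed-node-2 line above.
sockets="66/832"
used=${sockets%/*}     # text before the slash
limit=${sockets#*/}    # text after the slash
# warn only when usage crosses 80% of the limit
if (( used * 100 / limit < 80 )); then
  echo "sockets ok: ${used}/${limit}"
fi
```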
2026-04-04 01:15:49.483109 | orchestrator | 2026-04-04 01:15:49 | INFO  | RabbitMQ Cluster validation PASSED 2026-04-04 01:15:49.707278 | orchestrator | 2026-04-04 01:15:49.707328 | orchestrator | # Status of Redis 2026-04-04 01:15:49.707333 | orchestrator | 2026-04-04 01:15:49.707338 | orchestrator | + echo 2026-04-04 01:15:49.707342 | orchestrator | + echo '# Status of Redis' 2026-04-04 01:15:49.707346 | orchestrator | + echo 2026-04-04 01:15:49.707351 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-04 01:15:49.710647 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001381s;;;0.000000;10.000000 2026-04-04 01:15:49.710994 | orchestrator | + popd 2026-04-04 01:15:49.711085 | orchestrator | + echo 2026-04-04 01:15:49.711136 | orchestrator | 2026-04-04 01:15:49.711144 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-04 01:15:49.711192 | orchestrator | # Create backup of MariaDB database 2026-04-04 01:15:49.711305 | orchestrator | + echo 2026-04-04 01:15:49.711314 | orchestrator | 2026-04-04 01:15:49.711321 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-04 01:15:51.086057 | orchestrator | 2026-04-04 01:15:51 | INFO  | Prepare task for execution of mariadb_backup. 2026-04-04 01:15:51.147652 | orchestrator | 2026-04-04 01:15:51 | INFO  | Task 4ee45807-e7ca-4524-b6f4-a369d25bbe59 (mariadb_backup) was prepared for execution. 2026-04-04 01:15:51.147710 | orchestrator | 2026-04-04 01:15:51 | INFO  | It takes a moment until task 4ee45807-e7ca-4524-b6f4-a369d25bbe59 (mariadb_backup) has been started and output is visible here. 
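Aside (not part of the job output): the `check_tcp` line in the Redis check above follows the common Nagios plugin convention of human-readable status text before the `|` separator and machine-readable perfdata after it. A minimal sketch of splitting such a line, using the output printed above as the sample:

```shell
# Sketch only; the sample line is the check_tcp output from the Redis check above.
line='TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001381s;;;0.000000;10.000000'
status="${line%%|*}"    # everything before the first '|'
perfdata="${line#*|}"   # everything after the first '|'
echo "$status"
echo "$perfdata"
```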
2026-04-04 01:16:17.746368 | orchestrator | 2026-04-04 01:16:17.746429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-04 01:16:17.746439 | orchestrator | 2026-04-04 01:16:17.746447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-04 01:16:17.746454 | orchestrator | Saturday 04 April 2026 01:15:54 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-04 01:16:17.746461 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:16:17.746469 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:16:17.746476 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:16:17.746483 | orchestrator | 2026-04-04 01:16:17.746489 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-04 01:16:17.746495 | orchestrator | Saturday 04 April 2026 01:15:54 +0000 (0:00:00.314) 0:00:00.546 ******** 2026-04-04 01:16:17.746502 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-04 01:16:17.746510 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-04 01:16:17.746516 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-04 01:16:17.746523 | orchestrator | 2026-04-04 01:16:17.746530 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-04 01:16:17.746537 | orchestrator | 2026-04-04 01:16:17.746558 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-04 01:16:17.746611 | orchestrator | Saturday 04 April 2026 01:15:54 +0000 (0:00:00.414) 0:00:00.961 ******** 2026-04-04 01:16:17.746619 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-04 01:16:17.746624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-04 01:16:17.746630 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-04 01:16:17.746636 | orchestrator | 
2026-04-04 01:16:17.746642 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-04 01:16:17.746648 | orchestrator | Saturday 04 April 2026 01:15:55 +0000 (0:00:00.411) 0:00:01.372 ******** 2026-04-04 01:16:17.746656 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-04 01:16:17.746663 | orchestrator | 2026-04-04 01:16:17.746670 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-04 01:16:17.746677 | orchestrator | Saturday 04 April 2026 01:15:56 +0000 (0:00:00.654) 0:00:02.027 ******** 2026-04-04 01:16:17.746684 | orchestrator | ok: [testbed-node-1] 2026-04-04 01:16:17.746691 | orchestrator | ok: [testbed-node-0] 2026-04-04 01:16:17.746697 | orchestrator | ok: [testbed-node-2] 2026-04-04 01:16:17.746704 | orchestrator | 2026-04-04 01:16:17.746711 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-04 01:16:17.746718 | orchestrator | Saturday 04 April 2026 01:15:59 +0000 (0:00:03.133) 0:00:05.160 ******** 2026-04-04 01:16:17.746725 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:16:17.746732 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:16:17.746739 | orchestrator | changed: [testbed-node-0] 2026-04-04 01:16:17.746746 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-04 01:16:17.746753 | orchestrator | 2026-04-04 01:16:17.746760 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-04 01:16:17.746767 | orchestrator | skipping: no hosts matched 2026-04-04 01:16:17.746774 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-04 01:16:17.746781 | orchestrator | 2026-04-04 01:16:17.746788 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2026-04-04 01:16:17.746795 | orchestrator | skipping: no hosts matched 2026-04-04 01:16:17.746802 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-04 01:16:17.746809 | orchestrator | mariadb_bootstrap_restart 2026-04-04 01:16:17.746849 | orchestrator | 2026-04-04 01:16:17.746856 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-04 01:16:17.746863 | orchestrator | skipping: no hosts matched 2026-04-04 01:16:17.746870 | orchestrator | 2026-04-04 01:16:17.746877 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-04 01:16:17.746884 | orchestrator | 2026-04-04 01:16:17.746891 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-04 01:16:17.746898 | orchestrator | Saturday 04 April 2026 01:16:16 +0000 (0:00:17.848) 0:00:23.008 ******** 2026-04-04 01:16:17.746915 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:16:17.746922 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:16:17.746929 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:16:17.746936 | orchestrator | 2026-04-04 01:16:17.746943 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-04 01:16:17.746950 | orchestrator | Saturday 04 April 2026 01:16:17 +0000 (0:00:00.289) 0:00:23.298 ******** 2026-04-04 01:16:17.746959 | orchestrator | skipping: [testbed-node-0] 2026-04-04 01:16:17.746967 | orchestrator | skipping: [testbed-node-1] 2026-04-04 01:16:17.746975 | orchestrator | skipping: [testbed-node-2] 2026-04-04 01:16:17.746982 | orchestrator | 2026-04-04 01:16:17.746990 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:16:17.747000 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-04 01:16:17.747016 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 01:16:17.747024 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-04 01:16:17.747032 | orchestrator | 2026-04-04 01:16:17.747038 | orchestrator | 2026-04-04 01:16:17.747044 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-04 01:16:17.747051 | orchestrator | Saturday 04 April 2026 01:16:17 +0000 (0:00:00.211) 0:00:23.510 ******** 2026-04-04 01:16:17.747058 | orchestrator | =============================================================================== 2026-04-04 01:16:17.747067 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.85s 2026-04-04 01:16:17.747088 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.13s 2026-04-04 01:16:17.747098 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.65s 2026-04-04 01:16:17.747105 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-04-04 01:16:17.747114 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2026-04-04 01:16:17.747122 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-04-04 01:16:17.747129 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2026-04-04 01:16:17.747138 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.21s 2026-04-04 01:16:17.914815 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-04 01:16:17.922770 | orchestrator | + set -e 2026-04-04 01:16:17.922853 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-04 01:16:17.922866 | 
orchestrator | ++ export INTERACTIVE=false 2026-04-04 01:16:17.922962 | orchestrator | ++ INTERACTIVE=false 2026-04-04 01:16:17.922975 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-04 01:16:17.923927 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-04 01:16:17.923964 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-04 01:16:17.924367 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-04 01:16:17.930103 | orchestrator | 2026-04-04 01:16:17.930163 | orchestrator | # OpenStack endpoints 2026-04-04 01:16:17.930172 | orchestrator | 2026-04-04 01:16:17.930179 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:16:17.930186 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:16:17.930192 | orchestrator | + export OS_CLOUD=admin 2026-04-04 01:16:17.930199 | orchestrator | + OS_CLOUD=admin 2026-04-04 01:16:17.930205 | orchestrator | + echo 2026-04-04 01:16:17.930212 | orchestrator | + echo '# OpenStack endpoints' 2026-04-04 01:16:17.930219 | orchestrator | + echo 2026-04-04 01:16:17.930224 | orchestrator | + openstack endpoint list 2026-04-04 01:16:21.135631 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:16:21.135726 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-04 01:16:21.135736 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:16:21.135741 | orchestrator | | 0d46de407b954a6884c5e1834c6ce55e | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-04 01:16:21.135745 | orchestrator | | 1247873c5ad94c44a781a4d821408d3d | RegionOne | neutron | 
network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-04 01:16:21.135761 | orchestrator | | 138992ab90df48ae88e48a4d39f1e05b | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-04 01:16:21.135765 | orchestrator | | 166b0d1b322b4a709bee8a04326b5c82 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-04 01:16:21.135784 | orchestrator | | 2d09e7e9011a4c9f80b04bdbeaf0a220 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-04 01:16:21.135788 | orchestrator | | 3574652b5c4e421e8934cae15bcfca50 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-04 01:16:21.135792 | orchestrator | | 4464492fff1445b39fbd84004769f5f1 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-04 01:16:21.135796 | orchestrator | | 47c4b99c92714598803dc27bf3482d34 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-04 01:16:21.135800 | orchestrator | | 5282402d641f4105874e57ed7db20767 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-04 01:16:21.135803 | orchestrator | | 69ea4914889d464d929e64bf2b50e7e5 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-04 01:16:21.135807 | orchestrator | | 8bc265341e2d4a0dabee363313ea2137 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-04 01:16:21.135811 | orchestrator | | 95fab410134b41e89512b768be9ff859 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-04 01:16:21.135815 | orchestrator | | 9c0e8d0916814baeac000765593ef23e | RegionOne | glance | image | True | internal | 
https://api-int.testbed.osism.xyz:9292 | 2026-04-04 01:16:21.135818 | orchestrator | | a03c16a4ee7346d69fb2503d951343b8 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-04 01:16:21.135822 | orchestrator | | ad0f8d74b30849eca3e058482a94ab50 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-04 01:16:21.135826 | orchestrator | | b00ac4a6e6c54fc9afc933d5508e5fe7 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-04 01:16:21.135830 | orchestrator | | b445bfed9b354911a12167486d550a48 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-04 01:16:21.135833 | orchestrator | | d1600e7d8a1a49e689445011734a527c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-04 01:16:21.135837 | orchestrator | | d437c98f158942618a43f436f5f3b16e | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-04 01:16:21.135841 | orchestrator | | ea27e962bb7f4c59bbf0561380d6bd0f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-04 01:16:21.135857 | orchestrator | | ec75cbbac7c442f8b130eab6ef2fc060 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-04 01:16:21.135862 | orchestrator | | f4d9c1b3cb6f4922a18dffa8d68f13e0 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-04 01:16:21.135865 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-04 01:16:21.353670 | orchestrator | 2026-04-04 01:16:21.353744 | orchestrator | # Cinder 2026-04-04 01:16:21.353771 | orchestrator | 2026-04-04 01:16:21.353776 | 
orchestrator | + echo 2026-04-04 01:16:21.353781 | orchestrator | + echo '# Cinder' 2026-04-04 01:16:21.353785 | orchestrator | + echo 2026-04-04 01:16:21.353789 | orchestrator | + openstack volume service list 2026-04-04 01:16:24.894677 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-04 01:16:24.894762 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-04 01:16:24.894773 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-04 01:16:24.894779 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-04T01:16:16.000000 | 2026-04-04 01:16:24.894805 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-04T01:16:17.000000 | 2026-04-04 01:16:24.894811 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-04T01:16:16.000000 | 2026-04-04 01:16:24.894817 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-04T01:16:16.000000 | 2026-04-04 01:16:24.894823 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-04T01:16:22.000000 | 2026-04-04 01:16:24.894829 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-04T01:16:23.000000 | 2026-04-04 01:16:24.894834 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-04T01:16:24.000000 | 2026-04-04 01:16:24.894841 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-04T01:16:17.000000 | 2026-04-04 01:16:24.894847 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-04T01:16:17.000000 | 2026-04-04 01:16:24.894853 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 
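The Cinder listing above is printed for manual inspection; the health criterion it implies (every service enabled and up) can also be asserted mechanically. A minimal sketch, assuming `openstack volume service list -f json` output whose key names mirror the table columns shown in the log (the exact JSON keys are an assumption about the client's formatter, not something this job runs):

```python
import json

# Sample rows in the shape of `openstack volume service list -f json`.
# Key names are taken from the table columns in the log and are an
# assumption -- adjust for your client version.
SAMPLE = json.dumps([
    {"Binary": "cinder-scheduler", "Host": "testbed-node-0",
     "Zone": "internal", "Status": "enabled", "State": "up"},
    {"Binary": "cinder-volume", "Host": "testbed-node-0@rbd-volumes",
     "Zone": "nova", "Status": "enabled", "State": "up"},
    {"Binary": "cinder-backup", "Host": "testbed-node-0",
     "Zone": "nova", "Status": "enabled", "State": "up"},
])

def unhealthy(services_json: str) -> list[str]:
    """Return binary@host for every service that is not enabled and up."""
    return [
        f"{s['Binary']}@{s['Host']}"
        for s in json.loads(services_json)
        if s["Status"] != "enabled" or s["State"] != "up"
    ]

print(unhealthy(SAMPLE))  # [] when everything is healthy
```

An empty list corresponds to the all-`up` table above; anything else would name the degraded services.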
2026-04-04 01:16:25.132908 | orchestrator | 2026-04-04 01:16:25.133015 | orchestrator | # Neutron 2026-04-04 01:16:25.133026 | orchestrator | 2026-04-04 01:16:25.133033 | orchestrator | + echo 2026-04-04 01:16:25.133039 | orchestrator | + echo '# Neutron' 2026-04-04 01:16:25.133047 | orchestrator | + echo 2026-04-04 01:16:25.133053 | orchestrator | + openstack network agent list 2026-04-04 01:16:27.699948 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:16:27.699999 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-04 01:16:27.700005 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:16:27.700009 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700013 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700017 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700020 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700024 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700028 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-04 01:16:27.700032 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:16:27.700048 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent 
| testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:16:27.700058 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-04 01:16:27.700066 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-04 01:16:27.933636 | orchestrator | + openstack network service provider list 2026-04-04 01:16:30.481483 | orchestrator | +---------------+------+---------+ 2026-04-04 01:16:30.481632 | orchestrator | | Service Type | Name | Default | 2026-04-04 01:16:30.481645 | orchestrator | +---------------+------+---------+ 2026-04-04 01:16:30.481651 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-04-04 01:16:30.481659 | orchestrator | +---------------+------+---------+ 2026-04-04 01:16:30.723951 | orchestrator | 2026-04-04 01:16:30.724021 | orchestrator | # Nova 2026-04-04 01:16:30.724027 | orchestrator | 2026-04-04 01:16:30.724032 | orchestrator | + echo 2026-04-04 01:16:30.724036 | orchestrator | + echo '# Nova' 2026-04-04 01:16:30.724040 | orchestrator | + echo 2026-04-04 01:16:30.724045 | orchestrator | + openstack compute service list 2026-04-04 01:16:33.410167 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:16:33.410242 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-04-04 01:16:33.410254 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:16:33.410263 | orchestrator | | c282e207-7b9d-4413-98c1-9c74294d435f | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-04T01:16:29.000000 | 2026-04-04 01:16:33.410269 | orchestrator | | 6c948bb4-bc83-4ce8-b683-93f15d8b3ba1 | 
nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-04T01:16:25.000000 | 2026-04-04 01:16:33.410275 | orchestrator | | aa357262-5f89-4734-91ba-a207ce69e910 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-04T01:16:31.000000 | 2026-04-04 01:16:33.410297 | orchestrator | | e0cfee18-e54c-47bb-8c99-4af0d4f07582 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-04T01:16:29.000000 | 2026-04-04 01:16:33.410304 | orchestrator | | bfdb7642-c738-4a1c-8229-bc8073d9648b | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-04T01:16:29.000000 | 2026-04-04 01:16:33.410310 | orchestrator | | 644728d1-0ec9-40d7-bd10-c9364e2283ce | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-04T01:16:32.000000 | 2026-04-04 01:16:33.410373 | orchestrator | | 256a3e3d-2482-4bc3-990b-10ca7f853cdf | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-04T01:16:28.000000 | 2026-04-04 01:16:33.410383 | orchestrator | | 31025086-81ad-4b77-b633-10554036f69c | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-04T01:16:28.000000 | 2026-04-04 01:16:33.410388 | orchestrator | | 19a2791f-fafd-484b-81ff-0531f40e689e | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-04T01:16:28.000000 | 2026-04-04 01:16:33.410392 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-04-04 01:16:33.652791 | orchestrator | + openstack hypervisor list 2026-04-04 01:16:36.181228 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:16:36.181319 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-04-04 01:16:36.181330 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:16:36.181337 | orchestrator | | 
246e88e5-7005-4685-9d47-92f76ef19d20 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-04-04 01:16:36.181343 | orchestrator | | 9f2a30e0-cc4c-4191-b33e-473e50bab9c3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-04-04 01:16:36.181349 | orchestrator | | 7da88395-7789-47ff-8f51-669d67b0e2f6 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-04-04 01:16:36.181380 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-04-04 01:16:36.400779 | orchestrator | 2026-04-04 01:16:36.400848 | orchestrator | # Run OpenStack test play 2026-04-04 01:16:36.400855 | orchestrator | 2026-04-04 01:16:36.400860 | orchestrator | + echo 2026-04-04 01:16:36.400865 | orchestrator | + echo '# Run OpenStack test play' 2026-04-04 01:16:36.400870 | orchestrator | + echo 2026-04-04 01:16:36.400874 | orchestrator | + osism apply --environment openstack test 2026-04-04 01:16:37.662795 | orchestrator | 2026-04-04 01:16:37 | INFO  | Trying to run play test in environment openstack 2026-04-04 01:16:47.691976 | orchestrator | 2026-04-04 01:16:47 | INFO  | Prepare task for execution of test. 2026-04-04 01:16:47.774597 | orchestrator | 2026-04-04 01:16:47 | INFO  | Task 1a43bd70-ccb8-4be2-ba32-d3215a292729 (test) was prepared for execution. 2026-04-04 01:16:47.774681 | orchestrator | 2026-04-04 01:16:47 | INFO  | It takes a moment until task 1a43bd70-ccb8-4be2-ba32-d3215a292729 (test) has been started and output is visible here. 
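The check script so far only prints the endpoint, Cinder, Neutron, and Nova listings for eyeballing. One invariant visible in the endpoint table earlier (each service type exposing both a public and an internal interface) could be verified programmatically. A minimal sketch, assuming `openstack endpoint list -f json` output with key names matching the table columns (an assumption about the client's JSON formatter; this is not part of 300-openstack.sh):

```python
import json
from collections import defaultdict

# Sample rows in the shape of `openstack endpoint list -f json`; key
# names mirror the log's table columns and are an assumption.
SAMPLE = json.dumps([
    {"Service Type": "identity", "Interface": "public",
     "URL": "https://api.testbed.osism.xyz:5000"},
    {"Service Type": "identity", "Interface": "internal",
     "URL": "https://api-int.testbed.osism.xyz:5000"},
    {"Service Type": "network", "Interface": "public",
     "URL": "https://api.testbed.osism.xyz:9696"},
])

def missing_interfaces(endpoints_json, required=("public", "internal")):
    """Map each service type to the required interfaces it lacks."""
    seen = defaultdict(set)
    for ep in json.loads(endpoints_json):
        seen[ep["Service Type"]].add(ep["Interface"])
    return {svc: sorted(set(required) - ifaces)
            for svc, ifaces in seen.items() if set(required) - ifaces}

print(missing_interfaces(SAMPLE))  # {'network': ['internal']}
```

Against the full endpoint table in this log the result would be empty, since every service type lists both interfaces.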
2026-04-04 01:20:01.932966 | orchestrator | 2026-04-04 01:20:01.933033 | orchestrator | PLAY [Create test project] ***************************************************** 2026-04-04 01:20:01.933042 | orchestrator | 2026-04-04 01:20:01.933049 | orchestrator | TASK [Create test domain] ****************************************************** 2026-04-04 01:20:01.933055 | orchestrator | Saturday 04 April 2026 01:16:50 +0000 (0:00:00.099) 0:00:00.099 ******** 2026-04-04 01:20:01.933062 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933068 | orchestrator | 2026-04-04 01:20:01.933074 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-04-04 01:20:01.933080 | orchestrator | Saturday 04 April 2026 01:16:54 +0000 (0:00:03.804) 0:00:03.904 ******** 2026-04-04 01:20:01.933087 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933092 | orchestrator | 2026-04-04 01:20:01.933099 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-04-04 01:20:01.933104 | orchestrator | Saturday 04 April 2026 01:16:59 +0000 (0:00:04.348) 0:00:08.252 ******** 2026-04-04 01:20:01.933110 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933115 | orchestrator | 2026-04-04 01:20:01.933122 | orchestrator | TASK [Create test project] ***************************************************** 2026-04-04 01:20:01.933127 | orchestrator | Saturday 04 April 2026 01:17:05 +0000 (0:00:06.443) 0:00:14.696 ******** 2026-04-04 01:20:01.933134 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933140 | orchestrator | 2026-04-04 01:20:01.933146 | orchestrator | TASK [Create test user] ******************************************************** 2026-04-04 01:20:01.933152 | orchestrator | Saturday 04 April 2026 01:17:09 +0000 (0:00:04.023) 0:00:18.719 ******** 2026-04-04 01:20:01.933158 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933165 | orchestrator | 2026-04-04 01:20:01.933171 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-04-04 01:20:01.933176 | orchestrator | Saturday 04 April 2026 01:17:13 +0000 (0:00:04.321) 0:00:23.041 ******** 2026-04-04 01:20:01.933182 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-04-04 01:20:01.933189 | orchestrator | changed: [localhost] => (item=member) 2026-04-04 01:20:01.933195 | orchestrator | changed: [localhost] => (item=creator) 2026-04-04 01:20:01.933201 | orchestrator | 2026-04-04 01:20:01.933207 | orchestrator | TASK [Create test server group] ************************************************ 2026-04-04 01:20:01.933213 | orchestrator | Saturday 04 April 2026 01:17:25 +0000 (0:00:12.017) 0:00:35.059 ******** 2026-04-04 01:20:01.933219 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933224 | orchestrator | 2026-04-04 01:20:01.933231 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-04-04 01:20:01.933237 | orchestrator | Saturday 04 April 2026 01:17:30 +0000 (0:00:04.301) 0:00:39.361 ******** 2026-04-04 01:20:01.933243 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933249 | orchestrator | 2026-04-04 01:20:01.933256 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-04-04 01:20:01.933277 | orchestrator | Saturday 04 April 2026 01:17:35 +0000 (0:00:05.278) 0:00:44.640 ******** 2026-04-04 01:20:01.933283 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933290 | orchestrator | 2026-04-04 01:20:01.933295 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-04-04 01:20:01.933300 | orchestrator | Saturday 04 April 2026 01:17:39 +0000 (0:00:04.090) 0:00:48.730 ******** 2026-04-04 01:20:01.933306 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933311 | orchestrator | 2026-04-04 01:20:01.933317 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-04-04 01:20:01.933323 | orchestrator | Saturday 04 April 2026 01:17:43 +0000 (0:00:03.800) 0:00:52.531 ******** 2026-04-04 01:20:01.933328 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933334 | orchestrator | 2026-04-04 01:20:01.933341 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-04-04 01:20:01.933346 | orchestrator | Saturday 04 April 2026 01:17:47 +0000 (0:00:04.191) 0:00:56.723 ******** 2026-04-04 01:20:01.933352 | orchestrator | changed: [localhost] 2026-04-04 01:20:01.933359 | orchestrator | 2026-04-04 01:20:01.933365 | orchestrator | TASK [Create test networks] **************************************************** 2026-04-04 01:20:01.933372 | orchestrator | Saturday 04 April 2026 01:17:51 +0000 (0:00:04.120) 0:01:00.844 ******** 2026-04-04 01:20:01.933379 | orchestrator | changed: [localhost] => (item={'name': 'test-1'}) 2026-04-04 01:20:01.933385 | orchestrator | changed: [localhost] => (item={'name': 'test-2'}) 2026-04-04 01:20:01.933390 | orchestrator | changed: [localhost] => (item={'name': 'test-3'}) 2026-04-04 01:20:01.933406 | orchestrator | 2026-04-04 01:20:01.933412 | orchestrator | TASK [Create test subnets] ***************************************************** 2026-04-04 01:20:01.933418 | orchestrator | Saturday 04 April 2026 01:18:05 +0000 (0:00:14.053) 0:01:14.897 ******** 2026-04-04 01:20:01.933424 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'}) 2026-04-04 01:20:01.933430 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'}) 2026-04-04 01:20:01.933436 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'}) 2026-04-04 01:20:01.933442 | orchestrator | 2026-04-04 01:20:01.933447 | orchestrator | TASK [Create test routers] 
***************************************************** 2026-04-04 01:20:01.933453 | orchestrator | Saturday 04 April 2026 01:18:22 +0000 (0:00:16.414) 0:01:31.311 ******** 2026-04-04 01:20:01.933534 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'}) 2026-04-04 01:20:01.933542 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'}) 2026-04-04 01:20:01.933546 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'}) 2026-04-04 01:20:01.933550 | orchestrator | 2026-04-04 01:20:01.933554 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-04-04 01:20:01.933558 | orchestrator | 2026-04-04 01:20:01.933562 | orchestrator | TASK [Get test server group] *************************************************** 2026-04-04 01:20:01.933577 | orchestrator | Saturday 04 April 2026 01:18:55 +0000 (0:00:33.039) 0:02:04.351 ******** 2026-04-04 01:20:01.933581 | orchestrator | ok: [localhost] 2026-04-04 01:20:01.933585 | orchestrator | 2026-04-04 01:20:01.933589 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-04-04 01:20:01.933593 | orchestrator | Saturday 04 April 2026 01:18:58 +0000 (0:00:03.680) 0:02:08.031 ******** 2026-04-04 01:20:01.933607 | orchestrator | skipping: [localhost] 2026-04-04 01:20:01.933613 | orchestrator | 2026-04-04 01:20:01.933620 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-04-04 01:20:01.933626 | orchestrator | Saturday 04 April 2026 01:18:58 +0000 (0:00:00.050) 0:02:08.082 ******** 2026-04-04 01:20:01.933632 | orchestrator | skipping: [localhost] 2026-04-04 01:20:01.933639 | orchestrator | 2026-04-04 01:20:01.933644 | orchestrator | TASK [Delete test instances] *************************************************** 2026-04-04 01:20:01.933657 | orchestrator | 
Saturday 04 April 2026 01:18:58 +0000 (0:00:00.053) 0:02:08.136 ******** 2026-04-04 01:20:01.933663 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})  2026-04-04 01:20:01.933670 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})  2026-04-04 01:20:01.933676 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})  2026-04-04 01:20:01.933681 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})  2026-04-04 01:20:01.933688 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})  2026-04-04 01:20:01.933694 | orchestrator | skipping: [localhost] 2026-04-04 01:20:01.933701 | orchestrator | 2026-04-04 01:20:01.933707 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-04-04 01:20:01.933714 | orchestrator | Saturday 04 April 2026 01:18:59 +0000 (0:00:00.186) 0:02:08.322 ******** 2026-04-04 01:20:01.933720 | orchestrator | skipping: [localhost] 2026-04-04 01:20:01.933727 | orchestrator | 2026-04-04 01:20:01.933733 | orchestrator | TASK [Create test instances] *************************************************** 2026-04-04 01:20:01.933741 | orchestrator | Saturday 04 April 2026 01:18:59 +0000 (0:00:00.145) 0:02:08.467 ******** 2026-04-04 01:20:01.933747 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-04 01:20:01.933754 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-04 01:20:01.933760 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-04 01:20:01.933766 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-04 01:20:01.933776 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-04 01:20:01.933782 | orchestrator | 2026-04-04 
01:20:01.933789 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-04-04 01:20:01.933795 | orchestrator | Saturday 04 April 2026 01:19:03 +0000 (0:00:04.338) 0:02:12.806 ******** 2026-04-04 01:20:01.933801 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-04-04 01:20:01.933808 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-04-04 01:20:01.933814 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-04-04 01:20:01.933821 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-04-04 01:20:01.933829 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j805995349416.2759', 'results_file': '/ansible/.ansible_async/j805995349416.2759', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:20:01.933837 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-04-04 01:20:01.933841 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j170641309858.2784', 'results_file': '/ansible/.ansible_async/j170641309858.2784', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:20:01.933846 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j405887846468.2809', 'results_file': '/ansible/.ansible_async/j405887846468.2809', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:20:01.933849 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j362135298110.2834', 'results_file': '/ansible/.ansible_async/j362135298110.2834', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:20:01.933854 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j487978152939.2859', 'results_file': '/ansible/.ansible_async/j487978152939.2859', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:20:01.933861 | orchestrator | 2026-04-04 01:20:01.933864 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-04-04 01:20:01.933868 | orchestrator | Saturday 04 April 2026 01:20:00 +0000 (0:00:57.291) 0:03:10.098 ******** 2026-04-04 01:20:01.933876 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-04 01:21:13.295675 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-04 01:21:13.295778 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-04 01:21:13.295789 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 
2026-04-04 01:21:13.295796 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-04 01:21:13.295804 | orchestrator | 2026-04-04 01:21:13.295811 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-04-04 01:21:13.295817 | orchestrator | Saturday 04 April 2026 01:20:05 +0000 (0:00:04.587) 0:03:14.685 ******** 2026-04-04 01:21:13.295824 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-04-04 01:21:13.295834 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j746032200419.2970', 'results_file': '/ansible/.ansible_async/j746032200419.2970', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:21:13.295844 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j385703008290.2995', 'results_file': '/ansible/.ansible_async/j385703008290.2995', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:21:13.295850 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j670873715217.3020', 'results_file': '/ansible/.ansible_async/j670873715217.3020', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:21:13.295856 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j479080971041.3045', 'results_file': '/ansible/.ansible_async/j479080971041.3045', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:21:13.295880 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j152339129279.3070', 'results_file': '/ansible/.ansible_async/j152339129279.3070', 
'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'}) 2026-04-04 01:21:13.295887 | orchestrator | 2026-04-04 01:21:13.295893 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-04-04 01:21:13.295901 | orchestrator | Saturday 04 April 2026 01:20:14 +0000 (0:00:09.462) 0:03:24.148 ******** 2026-04-04 01:21:13.295905 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'}) 2026-04-04 01:21:13.295909 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'}) 2026-04-04 01:21:13.295913 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'}) 2026-04-04 01:21:13.295917 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'}) 2026-04-04 01:21:13.295921 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'}) 2026-04-04 01:21:13.295924 | orchestrator | 2026-04-04 01:21:13.295928 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-04-04 01:21:13.295932 | orchestrator | Saturday 04 April 2026 01:20:19 +0000 (0:00:04.615) 0:03:28.763 ******** 2026-04-04 01:21:13.295936 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-04-04 01:21:13.295956 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j169934657253.3139', 'results_file': '/ansible/.ansible_async/j169934657253.3139', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:21:13.295961 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j972691859466.3164', 'results_file': '/ansible/.ansible_async/j972691859466.3164', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-04 01:21:13.295967 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j188771393040.3190', 'results_file': '/ansible/.ansible_async/j188771393040.3190', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:21:13.295974 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j421186791650.3216', 'results_file': '/ansible/.ansible_async/j421186791650.3216', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-04 01:21:13.295997 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j702351109241.3242', 'results_file': '/ansible/.ansible_async/j702351109241.3242', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-04 01:21:13.296006 | orchestrator |
2026-04-04 01:21:13.296012 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-04 01:21:13.296018 | orchestrator | Saturday 04 April 2026 01:20:28 +0000 (0:00:09.392) 0:03:38.155 ********
2026-04-04 01:21:13.296023 | orchestrator | changed: [localhost]
2026-04-04 01:21:13.296030 | orchestrator |
2026-04-04 01:21:13.296036 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-04 01:21:13.296041 | orchestrator | Saturday 04 April 2026 01:20:35 +0000 (0:00:06.444) 0:03:44.600 ********
2026-04-04 01:21:13.296046 | orchestrator | changed: [localhost]
2026-04-04 01:21:13.296052 | orchestrator |
2026-04-04 01:21:13.296057 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-04 01:21:13.296063 | orchestrator | Saturday 04 April 2026 01:20:48 +0000 (0:00:13.166) 0:03:57.766 ********
2026-04-04 01:21:13.296070 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-04 01:21:13.296076 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-04 01:21:13.296082 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-04 01:21:13.296088 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-04 01:21:13.296094 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-04 01:21:13.296100 | orchestrator |
2026-04-04 01:21:13.296106 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-04 01:21:13.296112 | orchestrator | Saturday 04 April 2026 01:21:13 +0000 (0:00:24.441) 0:04:22.208 ********
2026-04-04 01:21:13.296118 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-04 01:21:13.296124 | orchestrator |  "msg": "test: 192.168.112.161"
2026-04-04 01:21:13.296130 | orchestrator | }
2026-04-04 01:21:13.296137 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-04 01:21:13.296143 | orchestrator |  "msg": "test-1: 192.168.112.179"
2026-04-04 01:21:13.296149 | orchestrator | }
2026-04-04 01:21:13.296154 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-04 01:21:13.296160 | orchestrator |  "msg": "test-2: 192.168.112.131"
2026-04-04 01:21:13.296166 | orchestrator | }
2026-04-04 01:21:13.296172 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-04 01:21:13.296179 | orchestrator |  "msg": "test-3: 192.168.112.127"
2026-04-04 01:21:13.296184 | orchestrator | }
2026-04-04 01:21:13.296188 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-04 01:21:13.296199 | orchestrator |  "msg": "test-4: 192.168.112.118"
2026-04-04 01:21:13.296208 | orchestrator | }
2026-04-04 01:21:13.296212 | orchestrator |
2026-04-04 01:21:13.296216 | orchestrator | PLAY RECAP *********************************************************************
2026-04-04 01:21:13.296221 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-04 01:21:13.296228 | orchestrator |
2026-04-04 01:21:13.296234 | orchestrator |
2026-04-04 01:21:13.296244 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:21:13.296251 | orchestrator | Saturday 04 April 2026 01:21:13 +0000 (0:00:00.112) 0:04:22.321 ********
2026-04-04 01:21:13.296257 | orchestrator | ===============================================================================
2026-04-04 01:21:13.296263 | orchestrator | Wait for instance creation to complete --------------------------------- 57.29s
2026-04-04 01:21:13.296269 | orchestrator | Create test routers ---------------------------------------------------- 33.04s
2026-04-04 01:21:13.296275 | orchestrator | Create floating ip addresses ------------------------------------------- 24.44s
2026-04-04 01:21:13.296281 | orchestrator | Create test subnets ---------------------------------------------------- 16.41s
2026-04-04 01:21:13.296286 | orchestrator | Create test networks --------------------------------------------------- 14.05s
2026-04-04 01:21:13.296292 | orchestrator | Attach test volume ----------------------------------------------------- 13.17s
2026-04-04 01:21:13.296298 | orchestrator | Add member roles to user test ------------------------------------------ 12.02s
2026-04-04 01:21:13.296304 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.46s
2026-04-04 01:21:13.296309 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.39s
2026-04-04 01:21:13.296315 | orchestrator | Create test volume ------------------------------------------------------ 6.44s
2026-04-04 01:21:13.296322 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.44s
2026-04-04 01:21:13.296328 | orchestrator | Create ssh security group ----------------------------------------------- 5.28s
2026-04-04 01:21:13.296334 | orchestrator | Add tag to instances ---------------------------------------------------- 4.62s
2026-04-04 01:21:13.296340 | orchestrator | Add metadata to instances ----------------------------------------------- 4.59s
2026-04-04 01:21:13.296346 | orchestrator | Create test-admin user -------------------------------------------------- 4.35s
2026-04-04 01:21:13.296353 | orchestrator | Create test instances --------------------------------------------------- 4.34s
2026-04-04 01:21:13.296359 | orchestrator | Create test user -------------------------------------------------------- 4.32s
2026-04-04 01:21:13.296365 | orchestrator | Create test server group ------------------------------------------------ 4.30s
2026-04-04 01:21:13.296371 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.19s
2026-04-04 01:21:13.296379 | orchestrator | Create test keypair ----------------------------------------------------- 4.12s
2026-04-04 01:21:13.456879 | orchestrator | + server_list
2026-04-04 01:21:13.456967 | orchestrator | + openstack --os-cloud test server list
2026-04-04 01:21:16.804401 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:21:16.804478 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-04 01:21:16.804483 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:21:16.804487 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE | test-2=192.168.112.127, 192.168.201.158 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:21:16.804490 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE | test-3=192.168.112.118, 192.168.202.100 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:21:16.804493 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE | test-2=192.168.112.131, 192.168.201.131 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:21:16.804509 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE | test-1=192.168.112.161, 192.168.200.87 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:21:16.804512 | orchestrator | | 760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE | test-1=192.168.112.179, 192.168.200.189 | N/A (booted from volume) | SCS-1L-1 |
2026-04-04 01:21:16.804515 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-04 01:21:17.044634 | orchestrator | + openstack --os-cloud test server show test
2026-04-04 01:21:20.469922 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:20.470073 | orchestrator | | Field | Value |
2026-04-04 01:21:20.470087 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:20.470095 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:21:20.470103 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:21:20.470109 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:21:20.470116 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-04 01:21:20.470123 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:21:20.470145 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:21:20.470164 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:21:20.470171 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:21:20.470177 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:21:20.470183 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:21:20.470189 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:21:20.470195 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:21:20.470201 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:21:20.470207 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:21:20.470224 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:21:20.470231 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:19:36.000000 |
2026-04-04 01:21:20.470243 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:21:20.470250 | orchestrator | | accessIPv4 | |
2026-04-04 01:21:20.470262 | orchestrator | | accessIPv6 | |
2026-04-04 01:21:20.470269 | orchestrator | | addresses | test-1=192.168.112.161, 192.168.200.87 |
2026-04-04 01:21:20.470276 | orchestrator | | config_drive | |
2026-04-04 01:21:20.470282 | orchestrator | | created | 2026-04-04T01:19:08Z |
2026-04-04 01:21:20.470288 | orchestrator | | description | None |
2026-04-04 01:21:20.470294 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:21:20.470306 | orchestrator | | hostId | 459d05b915e80ddbbe2d3646cbe84a6c3c36a894fcb22416c5c0b546 |
2026-04-04 01:21:20.470312 | orchestrator | | host_status | None |
2026-04-04 01:21:20.470324 | orchestrator | | id | 588dea88-0e26-4241-9445-e3c230ca9c0b |
2026-04-04 01:21:20.470331 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:21:20.470341 | orchestrator | | key_name | test |
2026-04-04 01:21:20.470348 | orchestrator | | locked | False |
2026-04-04 01:21:20.470354 | orchestrator | | locked_reason | None |
2026-04-04 01:21:20.470360 | orchestrator | | name | test |
2026-04-04 01:21:20.470366 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:21:20.470377 | orchestrator | | progress | 0 |
2026-04-04 01:21:20.470383 | orchestrator | | project_id | bac82e2f26c346d9932b415aed484ce3 |
2026-04-04 01:21:20.470389 | orchestrator | | properties | hostname='test' |
2026-04-04 01:21:20.470400 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:21:20.470407 | orchestrator | | | name='ssh' |
2026-04-04 01:21:20.470439 | orchestrator | | server_groups | None |
2026-04-04 01:21:20.470445 | orchestrator | | status | ACTIVE |
2026-04-04 01:21:20.470451 | orchestrator | | tags | test |
2026-04-04 01:21:20.470457 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:21:20.470474 | orchestrator | | updated | 2026-04-04T01:20:06Z |
2026-04-04 01:21:20.470480 | orchestrator | | user_id | 32981bb47adc491eb16bc9171c885937 |
2026-04-04 01:21:20.470487 | orchestrator | | volumes_attached | delete_on_termination='True', id='cdfed855-f25b-4779-a42c-12889fbd2010' |
2026-04-04 01:21:20.470493 | orchestrator | | | delete_on_termination='False', id='457d657d-6f34-4594-97bb-d0bb742eaddf' |
2026-04-04 01:21:20.474744 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:20.736871 | orchestrator | + openstack --os-cloud test server show test-1
2026-04-04 01:21:23.650125 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:23.650184 | orchestrator | | Field | Value |
2026-04-04 01:21:23.650194 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:23.650200 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:21:23.650216 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:21:23.650222 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:21:23.650228 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-04-04 01:21:23.650233 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:21:23.650239 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:21:23.650254 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:21:23.650264 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:21:23.650269 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:21:23.650275 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:21:23.650280 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:21:23.650290 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:21:23.650296 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:21:23.650302 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:21:23.650307 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:21:23.650313 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:19:38.000000 |
2026-04-04 01:21:23.650321 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:21:23.650326 | orchestrator | | accessIPv4 | |
2026-04-04 01:21:23.650331 | orchestrator | | accessIPv6 | |
2026-04-04 01:21:23.650336 | orchestrator | | addresses | test-1=192.168.112.179, 192.168.200.189 |
2026-04-04 01:21:23.650346 | orchestrator | | config_drive | |
2026-04-04 01:21:23.650351 | orchestrator | | created | 2026-04-04T01:19:08Z |
2026-04-04 01:21:23.650356 | orchestrator | | description | None |
2026-04-04 01:21:23.650361 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:21:23.650365 | orchestrator | | hostId | 459d05b915e80ddbbe2d3646cbe84a6c3c36a894fcb22416c5c0b546 |
2026-04-04 01:21:23.650370 | orchestrator | | host_status | None |
2026-04-04 01:21:23.650378 | orchestrator | | id | 760733d9-4799-478b-bdbd-302dde1eb789 |
2026-04-04 01:21:23.650384 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:21:23.650389 | orchestrator | | key_name | test |
2026-04-04 01:21:23.650397 | orchestrator | | locked | False |
2026-04-04 01:21:23.650401 | orchestrator | | locked_reason | None |
2026-04-04 01:21:23.650426 | orchestrator | | name | test-1 |
2026-04-04 01:21:23.650432 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:21:23.650436 | orchestrator | | progress | 0 |
2026-04-04 01:21:23.650441 | orchestrator | | project_id | bac82e2f26c346d9932b415aed484ce3 |
2026-04-04 01:21:23.650446 | orchestrator | | properties | hostname='test-1' |
2026-04-04 01:21:23.650454 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:21:23.650461 | orchestrator | | | name='ssh' |
2026-04-04 01:21:23.650469 | orchestrator | | server_groups | None |
2026-04-04 01:21:23.650475 | orchestrator | | status | ACTIVE |
2026-04-04 01:21:23.650483 | orchestrator | | tags | test |
2026-04-04 01:21:23.650497 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:21:23.650505 | orchestrator | | updated | 2026-04-04T01:20:07Z |
2026-04-04 01:21:23.650512 | orchestrator | | user_id | 32981bb47adc491eb16bc9171c885937 |
2026-04-04 01:21:23.650520 | orchestrator | | volumes_attached | delete_on_termination='True', id='f5d5f0d7-585a-46aa-8e2d-ee90a21a48b5' |
2026-04-04 01:21:23.654567 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:23.878838 | orchestrator | + openstack --os-cloud test server show test-2
2026-04-04 01:21:27.255844 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:27.255974 | orchestrator | | Field | Value |
2026-04-04 01:21:27.255986 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:27.255992 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:21:27.255998 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:21:27.256004 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:21:27.256010 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-04-04 01:21:27.256015 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:21:27.256021 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:21:27.256041 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:21:27.256048 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:21:27.256069 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:21:27.256076 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:21:27.256081 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:21:27.256087 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:21:27.256096 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:21:27.256101 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:21:27.256106 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:21:27.256114 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:19:38.000000 |
2026-04-04 01:21:27.256126 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:21:27.256138 | orchestrator | | accessIPv4 | |
2026-04-04 01:21:27.256147 | orchestrator | | accessIPv6 | |
2026-04-04 01:21:27.256154 | orchestrator | | addresses | test-2=192.168.112.131, 192.168.201.131 |
2026-04-04 01:21:27.256160 | orchestrator | | config_drive | |
2026-04-04 01:21:27.256166 | orchestrator | | created | 2026-04-04T01:19:09Z |
2026-04-04 01:21:27.256171 | orchestrator | | description | None |
2026-04-04 01:21:27.256177 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:21:27.256183 | orchestrator | | hostId | 459d05b915e80ddbbe2d3646cbe84a6c3c36a894fcb22416c5c0b546 |
2026-04-04 01:21:27.256189 | orchestrator | | host_status | None |
2026-04-04 01:21:27.256205 | orchestrator | | id | 1dcd1e21-dcd7-40f0-9206-29421e510440 |
2026-04-04 01:21:27.256211 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:21:27.256217 | orchestrator | | key_name | test |
2026-04-04 01:21:27.256223 | orchestrator | | locked | False |
2026-04-04 01:21:27.256229 | orchestrator | | locked_reason | None |
2026-04-04 01:21:27.256234 | orchestrator | | name | test-2 |
2026-04-04 01:21:27.256240 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:21:27.256246 | orchestrator | | progress | 0 |
2026-04-04 01:21:27.256258 | orchestrator | | project_id | bac82e2f26c346d9932b415aed484ce3 |
2026-04-04 01:21:27.256268 | orchestrator | | properties | hostname='test-2' |
2026-04-04 01:21:27.256279 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:21:27.256294 | orchestrator | | | name='ssh' |
2026-04-04 01:21:27.256300 | orchestrator | | server_groups | None |
2026-04-04 01:21:27.256306 | orchestrator | | status | ACTIVE |
2026-04-04 01:21:27.256311 | orchestrator | | tags | test |
2026-04-04 01:21:27.256316 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:21:27.256322 | orchestrator | | updated | 2026-04-04T01:20:08Z |
2026-04-04 01:21:27.256328 | orchestrator | | user_id | 32981bb47adc491eb16bc9171c885937 |
2026-04-04 01:21:27.256334 | orchestrator | | volumes_attached | delete_on_termination='True', id='c72dd129-a58a-469e-a8b2-cc52450b7659' |
2026-04-04 01:21:27.260071 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:27.469638 | orchestrator | + openstack --os-cloud test server show test-3
2026-04-04 01:21:30.337362 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:30.337580 | orchestrator | | Field | Value |
2026-04-04 01:21:30.337599 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:30.337606 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:21:30.337612 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:21:30.337617 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:21:30.337630 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-04-04 01:21:30.337637 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:21:30.337663 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:21:30.337685 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:21:30.337690 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:21:30.337699 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:21:30.337703 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:21:30.337707 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:21:30.337711 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:21:30.337715 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:21:30.337718 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:21:30.337727 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:21:30.337731 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:19:38.000000 |
2026-04-04 01:21:30.337738 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:21:30.337742 | orchestrator | | accessIPv4 | |
2026-04-04 01:21:30.337749 | orchestrator | | accessIPv6 | |
2026-04-04 01:21:30.337753 | orchestrator | | addresses | test-2=192.168.112.127, 192.168.201.158 |
2026-04-04 01:21:30.337757 | orchestrator | | config_drive | |
2026-04-04 01:21:30.337761 | orchestrator | | created | 2026-04-04T01:19:12Z |
2026-04-04 01:21:30.337765 | orchestrator | | description | None |
2026-04-04 01:21:30.337774 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:21:30.337778 | orchestrator | | hostId | ce0900f6227dcddad19d226c4e208d35f6dba5c9b53ba343b0392fe0 |
2026-04-04 01:21:30.337782 | orchestrator | | host_status | None |
2026-04-04 01:21:30.337790 | orchestrator | | id | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b |
2026-04-04 01:21:30.337794 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:21:30.337800 | orchestrator | | key_name | test |
2026-04-04 01:21:30.337804 | orchestrator | | locked | False |
2026-04-04 01:21:30.337808 | orchestrator | | locked_reason | None |
2026-04-04 01:21:30.337812 | orchestrator | | name | test-3 |
2026-04-04 01:21:30.337820 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:21:30.337824 | orchestrator | | progress | 0 |
2026-04-04 01:21:30.337828 | orchestrator | | project_id | bac82e2f26c346d9932b415aed484ce3 |
2026-04-04 01:21:30.337831 | orchestrator | | properties | hostname='test-3' |
2026-04-04 01:21:30.337840 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:21:30.337844 | orchestrator | | | name='ssh' |
2026-04-04 01:21:30.337850 | orchestrator | | server_groups | None |
2026-04-04 01:21:30.337854 | orchestrator | | status | ACTIVE |
2026-04-04 01:21:30.337860 | orchestrator | | tags | test |
2026-04-04 01:21:30.337867 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:21:30.337878 | orchestrator | | updated | 2026-04-04T01:20:08Z |
2026-04-04 01:21:30.337884 | orchestrator | | user_id | 32981bb47adc491eb16bc9171c885937 |
2026-04-04 01:21:30.337891 | orchestrator | | volumes_attached | delete_on_termination='True', id='eafdd87a-ebc1-4150-9b58-4a8ea4f1c20b' |
2026-04-04 01:21:30.341629 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:30.609170 | orchestrator | + openstack --os-cloud test server show test-4
2026-04-04 01:21:33.441986 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:33.442157 | orchestrator | | Field | Value |
2026-04-04 01:21:33.442171 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:33.442178 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-04 01:21:33.442186 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-04 01:21:33.442215 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-04 01:21:33.442224 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-04-04 01:21:33.442228 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-04 01:21:33.442232 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-04 01:21:33.442251 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-04 01:21:33.442599 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-04 01:21:33.442614 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-04 01:21:33.442620 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-04 01:21:33.442625 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-04 01:21:33.442636 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-04 01:21:33.442640 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-04 01:21:33.442643 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-04 01:21:33.442647 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-04 01:21:33.442651 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-04T01:19:38.000000 |
2026-04-04 01:21:33.442665 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-04 01:21:33.442669 | orchestrator | | accessIPv4 | |
2026-04-04 01:21:33.442673 | orchestrator | | accessIPv6 | |
2026-04-04 01:21:33.442677 | orchestrator | | addresses | test-3=192.168.112.118, 192.168.202.100 |
2026-04-04 01:21:33.442684 | orchestrator | | config_drive | |
2026-04-04 01:21:33.442687 | orchestrator | | created | 2026-04-04T01:19:11Z |
2026-04-04 01:21:33.442691 | orchestrator | | description | None |
2026-04-04 01:21:33.442696 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='True', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-04-04 01:21:33.442700 | orchestrator | | hostId | 459d05b915e80ddbbe2d3646cbe84a6c3c36a894fcb22416c5c0b546 |
2026-04-04 01:21:33.442706 | orchestrator | | host_status | None |
2026-04-04 01:21:33.442715 | orchestrator | | id | ab9136b0-9cdc-439d-90aa-fd8665daef3e |
2026-04-04 01:21:33.442719 | orchestrator | | image | N/A (booted from volume) |
2026-04-04 01:21:33.442723 | orchestrator | | key_name | test |
2026-04-04 01:21:33.442730 | orchestrator | | locked | False |
2026-04-04 01:21:33.442734 | orchestrator | | locked_reason | None |
2026-04-04 01:21:33.442738 | orchestrator | | name | test-4 |
2026-04-04 01:21:33.442742 | orchestrator | | pinned_availability_zone | None |
2026-04-04 01:21:33.442745 | orchestrator | | progress | 0 |
2026-04-04 01:21:33.442749 | orchestrator | | project_id | bac82e2f26c346d9932b415aed484ce3 |
2026-04-04 01:21:33.442755 | orchestrator | | properties | hostname='test-4' |
2026-04-04 01:21:33.442764 | orchestrator | | security_groups | name='icmp' |
2026-04-04 01:21:33.442768 | orchestrator | | | name='ssh' |
2026-04-04 01:21:33.442772 | orchestrator | | server_groups | None |
2026-04-04 01:21:33.442780 | orchestrator | | status | ACTIVE |
2026-04-04 01:21:33.442783 | orchestrator | | tags | test |
2026-04-04 01:21:33.442787 | orchestrator | | trusted_image_certificates | None |
2026-04-04 01:21:33.442791 | orchestrator | | updated | 2026-04-04T01:20:09Z |
2026-04-04 01:21:33.442795 | orchestrator | | user_id | 32981bb47adc491eb16bc9171c885937 |
2026-04-04 01:21:33.442799 | orchestrator | | volumes_attached | delete_on_termination='True', id='756b26d0-0e82-4fc1-8c1f-99cdd6efd1b4' |
2026-04-04 01:21:33.445623 | orchestrator | +-------------------------------------+---------------------------------------------------------------------------------+
2026-04-04 01:21:33.702693 | orchestrator | + server_ping
2026-04-04 01:21:33.703632 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-04 01:21:33.703674 | orchestrator | ++ tr -d '\r'
2026-04-04 01:21:36.259334 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:21:36.259442 | orchestrator | + ping -c3 192.168.112.118
2026-04-04 01:21:36.272211 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data.
2026-04-04 01:21:36.272294 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=6.42 ms 2026-04-04 01:21:37.269898 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=2.25 ms 2026-04-04 01:21:38.270661 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.41 ms 2026-04-04 01:21:38.270717 | orchestrator | 2026-04-04 01:21:38.270726 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-04 01:21:38.270733 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:21:38.270739 | orchestrator | rtt min/avg/max/mdev = 1.410/3.359/6.415/2.188 ms 2026-04-04 01:21:38.271359 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:21:38.271371 | orchestrator | + ping -c3 192.168.112.179 2026-04-04 01:21:38.280497 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-04-04 01:21:38.280543 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=5.09 ms 2026-04-04 01:21:39.278602 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.54 ms 2026-04-04 01:21:40.280926 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=2.09 ms 2026-04-04 01:21:40.281017 | orchestrator | 2026-04-04 01:21:40.281028 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-04 01:21:40.281037 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:21:40.281044 | orchestrator | rtt min/avg/max/mdev = 1.541/2.907/5.093/1.561 ms 2026-04-04 01:21:40.281532 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:21:40.281554 | orchestrator | + ping -c3 192.168.112.127 2026-04-04 01:21:40.293561 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
2026-04-04 01:21:40.293655 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.25 ms 2026-04-04 01:21:41.290518 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.39 ms 2026-04-04 01:21:42.291836 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.82 ms 2026-04-04 01:21:42.291917 | orchestrator | 2026-04-04 01:21:42.291925 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-04-04 01:21:42.291931 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:21:42.291935 | orchestrator | rtt min/avg/max/mdev = 1.816/3.818/7.246/2.435 ms 2026-04-04 01:21:42.293039 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:21:42.293073 | orchestrator | + ping -c3 192.168.112.161 2026-04-04 01:21:42.303722 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 2026-04-04 01:21:42.303803 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=5.98 ms 2026-04-04 01:21:43.302010 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.79 ms 2026-04-04 01:21:44.303768 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.84 ms 2026-04-04 01:21:44.303835 | orchestrator | 2026-04-04 01:21:44.303842 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-04-04 01:21:44.303848 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:21:44.303853 | orchestrator | rtt min/avg/max/mdev = 1.840/3.536/5.982/1.772 ms 2026-04-04 01:21:44.303858 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:21:44.303863 | orchestrator | + ping -c3 192.168.112.131 2026-04-04 01:21:44.323272 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 
2026-04-04 01:21:44.323342 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=9.30 ms 2026-04-04 01:21:45.317940 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.40 ms 2026-04-04 01:21:46.319165 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.90 ms 2026-04-04 01:21:46.319263 | orchestrator | 2026-04-04 01:21:46.319274 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-04 01:21:46.319282 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:21:46.319288 | orchestrator | rtt min/avg/max/mdev = 1.897/4.531/9.299/3.377 ms 2026-04-04 01:21:46.319751 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-04-04 01:21:46.320022 | orchestrator | + compute_list 2026-04-04 01:21:46.320057 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:21:47.929142 | orchestrator | 2026-04-04 01:21:47 | ERROR  | Unable to get ansible vault password 2026-04-04 01:21:47.929257 | orchestrator | 2026-04-04 01:21:47 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:21:47.929270 | orchestrator | 2026-04-04 01:21:47 | ERROR  | Dropping encrypted entries 2026-04-04 01:21:51.025188 | orchestrator | +------+--------+----------+ 2026-04-04 01:21:51.025273 | orchestrator | | ID | Name | Status | 2026-04-04 01:21:51.025281 | orchestrator | |------+--------+----------| 2026-04-04 01:21:51.025287 | orchestrator | +------+--------+----------+ 2026-04-04 01:21:51.316497 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:21:52.831679 | orchestrator | 2026-04-04 01:21:52 | ERROR  | Unable to get ansible vault password 2026-04-04 01:21:52.831740 | orchestrator | 2026-04-04 01:21:52 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:21:52.831750 | orchestrator | 2026-04-04 01:21:52 | ERROR  | 
Dropping encrypted entries 2026-04-04 01:21:54.362738 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:21:54.362846 | orchestrator | | ID | Name | Status | 2026-04-04 01:21:54.362867 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:21:54.362881 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE | 2026-04-04 01:21:54.362895 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE | 2026-04-04 01:21:54.362909 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE | 2026-04-04 01:21:54.362919 | orchestrator | | 760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE | 2026-04-04 01:21:54.362928 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:21:54.641652 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:21:56.232842 | orchestrator | 2026-04-04 01:21:56 | ERROR  | Unable to get ansible vault password 2026-04-04 01:21:56.232913 | orchestrator | 2026-04-04 01:21:56 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:21:56.232920 | orchestrator | 2026-04-04 01:21:56 | ERROR  | Dropping encrypted entries 2026-04-04 01:21:58.208166 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:21:58.208286 | orchestrator | | ID | Name | Status | 2026-04-04 01:21:58.208300 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:21:58.208307 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE | 2026-04-04 01:21:58.208313 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:21:58.496533 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2026-04-04 01:22:00.047229 | orchestrator | 2026-04-04 01:22:00 | ERROR  | Unable to get ansible 
vault password 2026-04-04 01:22:00.047485 | orchestrator | 2026-04-04 01:22:00 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:22:00.047515 | orchestrator | 2026-04-04 01:22:00 | ERROR  | Dropping encrypted entries 2026-04-04 01:22:01.602093 | orchestrator | 2026-04-04 01:22:01 | INFO  | Live migrating server ab9136b0-9cdc-439d-90aa-fd8665daef3e 2026-04-04 01:22:15.435504 | orchestrator | 2026-04-04 01:22:15 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:17.837864 | orchestrator | 2026-04-04 01:22:17 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:20.340749 | orchestrator | 2026-04-04 01:22:20 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:22.768486 | orchestrator | 2026-04-04 01:22:22 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:25.272815 | orchestrator | 2026-04-04 01:22:25 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:27.854180 | orchestrator | 2026-04-04 01:22:27 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:30.141002 | orchestrator | 2026-04-04 01:22:30 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:32.461458 | orchestrator | 2026-04-04 01:22:32 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:34.766523 | orchestrator | 2026-04-04 01:22:34 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:37.036875 | orchestrator | 2026-04-04 01:22:37 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is 
still in progress 2026-04-04 01:22:39.290173 | orchestrator | 2026-04-04 01:22:39 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:41.593032 | orchestrator | 2026-04-04 01:22:41 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:22:43.983310 | orchestrator | 2026-04-04 01:22:43 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) completed with status ACTIVE 2026-04-04 01:22:43.983438 | orchestrator | 2026-04-04 01:22:43 | INFO  | Live migrating server 1dcd1e21-dcd7-40f0-9206-29421e510440 2026-04-04 01:22:55.701336 | orchestrator | 2026-04-04 01:22:55 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:22:58.078484 | orchestrator | 2026-04-04 01:22:58 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:00.500442 | orchestrator | 2026-04-04 01:23:00 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:02.722330 | orchestrator | 2026-04-04 01:23:02 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:04.929418 | orchestrator | 2026-04-04 01:23:04 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:07.224650 | orchestrator | 2026-04-04 01:23:07 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:09.454302 | orchestrator | 2026-04-04 01:23:09 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:11.724879 | orchestrator | 2026-04-04 01:23:11 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:23:14.090624 | orchestrator | 2026-04-04 01:23:14 | INFO  | Live migration of 
1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) completed with status ACTIVE 2026-04-04 01:23:14.090705 | orchestrator | 2026-04-04 01:23:14 | INFO  | Live migrating server 588dea88-0e26-4241-9445-e3c230ca9c0b 2026-04-04 01:23:24.893042 | orchestrator | 2026-04-04 01:23:24 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:27.169753 | orchestrator | 2026-04-04 01:23:27 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:29.475483 | orchestrator | 2026-04-04 01:23:29 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:31.779209 | orchestrator | 2026-04-04 01:23:31 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:34.063471 | orchestrator | 2026-04-04 01:23:34 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:36.364226 | orchestrator | 2026-04-04 01:23:36 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:38.608905 | orchestrator | 2026-04-04 01:23:38 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:40.822914 | orchestrator | 2026-04-04 01:23:40 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:43.061186 | orchestrator | 2026-04-04 01:23:43 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:45.359211 | orchestrator | 2026-04-04 01:23:45 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:23:47.628702 | orchestrator | 2026-04-04 01:23:47 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) completed with status ACTIVE 2026-04-04 01:23:47.628785 | orchestrator | 2026-04-04 
01:23:47 | INFO  | Live migrating server 760733d9-4799-478b-bdbd-302dde1eb789 2026-04-04 01:23:59.732041 | orchestrator | 2026-04-04 01:23:59 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:02.109823 | orchestrator | 2026-04-04 01:24:02 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:04.393277 | orchestrator | 2026-04-04 01:24:04 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:06.771549 | orchestrator | 2026-04-04 01:24:06 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:09.218404 | orchestrator | 2026-04-04 01:24:09 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:11.775696 | orchestrator | 2026-04-04 01:24:11 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:14.199170 | orchestrator | 2026-04-04 01:24:14 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:16.414246 | orchestrator | 2026-04-04 01:24:16 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:24:18.658009 | orchestrator | 2026-04-04 01:24:18 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) completed with status ACTIVE 2026-04-04 01:24:18.938313 | orchestrator | + compute_list 2026-04-04 01:24:18.938367 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:24:20.569929 | orchestrator | 2026-04-04 01:24:20 | ERROR  | Unable to get ansible vault password 2026-04-04 01:24:20.570074 | orchestrator | 2026-04-04 01:24:20 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:24:20.570116 | orchestrator | 
2026-04-04 01:24:20 | ERROR  | Dropping encrypted entries 2026-04-04 01:24:22.303755 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:24:22.303842 | orchestrator | | ID | Name | Status | 2026-04-04 01:24:22.303849 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:24:22.303855 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE | 2026-04-04 01:24:22.303859 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE | 2026-04-04 01:24:22.303863 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE | 2026-04-04 01:24:22.303868 | orchestrator | | 760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE | 2026-04-04 01:24:22.303896 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:24:22.563050 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:24:24.101974 | orchestrator | 2026-04-04 01:24:24 | ERROR  | Unable to get ansible vault password 2026-04-04 01:24:24.102085 | orchestrator | 2026-04-04 01:24:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:24:24.102097 | orchestrator | 2026-04-04 01:24:24 | ERROR  | Dropping encrypted entries 2026-04-04 01:24:25.332426 | orchestrator | +------+--------+----------+ 2026-04-04 01:24:25.332530 | orchestrator | | ID | Name | Status | 2026-04-04 01:24:25.332539 | orchestrator | |------+--------+----------| 2026-04-04 01:24:25.332546 | orchestrator | +------+--------+----------+ 2026-04-04 01:24:25.595782 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:24:27.146418 | orchestrator | 2026-04-04 01:24:27 | ERROR  | Unable to get ansible vault password 2026-04-04 01:24:27.146463 | orchestrator | 2026-04-04 01:24:27 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 
01:24:27.146470 | orchestrator | 2026-04-04 01:24:27 | ERROR  | Dropping encrypted entries 2026-04-04 01:24:28.693494 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:24:28.693587 | orchestrator | | ID | Name | Status | 2026-04-04 01:24:28.693597 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:24:28.693603 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE | 2026-04-04 01:24:28.693609 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:24:28.970491 | orchestrator | + server_ping 2026-04-04 01:24:28.971201 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:24:28.971248 | orchestrator | ++ tr -d '\r' 2026-04-04 01:24:31.847497 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:24:31.847573 | orchestrator | + ping -c3 192.168.112.118 2026-04-04 01:24:31.857666 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data. 
2026-04-04 01:24:31.857739 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=6.40 ms 2026-04-04 01:24:32.855691 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=2.66 ms 2026-04-04 01:24:33.857218 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.61 ms 2026-04-04 01:24:33.857328 | orchestrator | 2026-04-04 01:24:33.857342 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-04 01:24:33.857998 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:24:33.858065 | orchestrator | rtt min/avg/max/mdev = 1.612/3.555/6.395/2.052 ms 2026-04-04 01:24:33.858076 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:24:33.858082 | orchestrator | + ping -c3 192.168.112.179 2026-04-04 01:24:33.865782 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 2026-04-04 01:24:33.865865 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.24 ms 2026-04-04 01:24:34.865170 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.00 ms 2026-04-04 01:24:35.866347 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.96 ms 2026-04-04 01:24:35.866439 | orchestrator | 2026-04-04 01:24:35.866452 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-04 01:24:35.866460 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:24:35.866468 | orchestrator | rtt min/avg/max/mdev = 1.958/2.731/4.240/1.067 ms 2026-04-04 01:24:35.866640 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:24:35.866816 | orchestrator | + ping -c3 192.168.112.127 2026-04-04 01:24:35.880171 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 
2026-04-04 01:24:35.880340 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=8.58 ms 2026-04-04 01:24:36.875665 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.20 ms 2026-04-04 01:24:37.876561 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.71 ms 2026-04-04 01:24:37.876659 | orchestrator | 2026-04-04 01:24:37.876672 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-04-04 01:24:37.876682 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:24:37.876690 | orchestrator | rtt min/avg/max/mdev = 1.706/4.162/8.576/3.127 ms 2026-04-04 01:24:37.877233 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:24:37.877253 | orchestrator | + ping -c3 192.168.112.161 2026-04-04 01:24:37.892140 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 2026-04-04 01:24:37.892208 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=11.1 ms 2026-04-04 01:24:38.884743 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.33 ms 2026-04-04 01:24:39.886393 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.82 ms 2026-04-04 01:24:39.886486 | orchestrator | 2026-04-04 01:24:39.886496 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-04-04 01:24:39.886504 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:24:39.886512 | orchestrator | rtt min/avg/max/mdev = 1.820/5.072/11.069/4.245 ms 2026-04-04 01:24:39.886584 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:24:39.886594 | orchestrator | + ping -c3 192.168.112.131 2026-04-04 01:24:39.897626 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 
2026-04-04 01:24:39.897720 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=6.65 ms 2026-04-04 01:24:40.894863 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=1.96 ms 2026-04-04 01:24:41.895978 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.76 ms 2026-04-04 01:24:41.896053 | orchestrator | 2026-04-04 01:24:41.896060 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-04 01:24:41.896066 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:24:41.896071 | orchestrator | rtt min/avg/max/mdev = 1.755/3.456/6.652/2.261 ms 2026-04-04 01:24:41.896703 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2026-04-04 01:24:43.420419 | orchestrator | 2026-04-04 01:24:43 | ERROR  | Unable to get ansible vault password 2026-04-04 01:24:43.420498 | orchestrator | 2026-04-04 01:24:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:24:43.420513 | orchestrator | 2026-04-04 01:24:43 | ERROR  | Dropping encrypted entries 2026-04-04 01:24:44.883370 | orchestrator | 2026-04-04 01:24:44 | INFO  | Live migrating server bc3ffe66-3feb-4ce0-8ead-f1358f250a3b 2026-04-04 01:24:55.143605 | orchestrator | 2026-04-04 01:24:55 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:24:57.533931 | orchestrator | 2026-04-04 01:24:57 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:00.063372 | orchestrator | 2026-04-04 01:25:00 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:02.520858 | orchestrator | 2026-04-04 01:25:02 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:04.755149 | orchestrator | 2026-04-04 01:25:04 | INFO  | 
Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:07.037128 | orchestrator | 2026-04-04 01:25:07 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:09.345427 | orchestrator | 2026-04-04 01:25:09 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:11.658631 | orchestrator | 2026-04-04 01:25:11 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:13.942513 | orchestrator | 2026-04-04 01:25:13 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) completed with status ACTIVE 2026-04-04 01:25:14.238732 | orchestrator | + compute_list 2026-04-04 01:25:14.238822 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:25:15.791064 | orchestrator | 2026-04-04 01:25:15 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:15.791139 | orchestrator | 2026-04-04 01:25:15 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:15.791147 | orchestrator | 2026-04-04 01:25:15 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:17.363064 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:25:17.363152 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:17.363163 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:25:17.363170 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE | 2026-04-04 01:25:17.363176 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE | 2026-04-04 01:25:17.363183 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE | 2026-04-04 01:25:17.363188 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE | 2026-04-04 01:25:17.363195 | orchestrator | | 
760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE | 2026-04-04 01:25:17.363202 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:25:17.640760 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:25:19.254108 | orchestrator | 2026-04-04 01:25:19 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:19.254290 | orchestrator | 2026-04-04 01:25:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:19.254312 | orchestrator | 2026-04-04 01:25:19 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:20.386717 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:20.386836 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:20.386847 | orchestrator | |------+--------+----------| 2026-04-04 01:25:20.386854 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:20.703938 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:25:22.272621 | orchestrator | 2026-04-04 01:25:22 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:22.272695 | orchestrator | 2026-04-04 01:25:22 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:22.272703 | orchestrator | 2026-04-04 01:25:22 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:23.334743 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:23.334796 | orchestrator | | ID | Name | Status | 2026-04-04 01:25:23.334802 | orchestrator | |------+--------+----------| 2026-04-04 01:25:23.334807 | orchestrator | +------+--------+----------+ 2026-04-04 01:25:23.609672 | orchestrator | + server_ping 2026-04-04 01:25:23.610668 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:25:23.610707 | orchestrator | ++ tr -d '\r' 2026-04-04 01:25:26.313676 | orchestrator | + for 
address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:26.313917 | orchestrator | + ping -c3 192.168.112.118 2026-04-04 01:25:26.324005 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data. 2026-04-04 01:25:26.324060 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=6.25 ms 2026-04-04 01:25:27.320884 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=1.50 ms 2026-04-04 01:25:28.322774 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.41 ms 2026-04-04 01:25:28.323412 | orchestrator | 2026-04-04 01:25:28.323458 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-04 01:25:28.323468 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:25:28.323476 | orchestrator | rtt min/avg/max/mdev = 1.410/3.050/6.247/2.260 ms 2026-04-04 01:25:28.323484 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:28.323491 | orchestrator | + ping -c3 192.168.112.179 2026-04-04 01:25:28.332104 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2026-04-04 01:25:28.332174 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.44 ms 2026-04-04 01:25:29.330879 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.52 ms 2026-04-04 01:25:30.332200 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.81 ms 2026-04-04 01:25:30.332313 | orchestrator | 2026-04-04 01:25:30.332324 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-04 01:25:30.332332 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:25:30.332339 | orchestrator | rtt min/avg/max/mdev = 1.522/2.589/4.440/1.313 ms 2026-04-04 01:25:30.333193 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:30.333258 | orchestrator | + ping -c3 192.168.112.127 2026-04-04 01:25:30.346816 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2026-04-04 01:25:30.346891 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=8.69 ms 2026-04-04 01:25:31.342101 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=1.87 ms 2026-04-04 01:25:32.343886 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.84 ms 2026-04-04 01:25:32.344018 | orchestrator | 2026-04-04 01:25:32.344031 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-04-04 01:25:32.344039 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:25:32.344046 | orchestrator | rtt min/avg/max/mdev = 1.835/4.130/8.688/3.223 ms 2026-04-04 01:25:32.344107 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:32.344118 | orchestrator | + ping -c3 192.168.112.161 2026-04-04 01:25:32.359329 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
2026-04-04 01:25:32.359423 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=10.4 ms 2026-04-04 01:25:33.352317 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=1.77 ms 2026-04-04 01:25:34.354359 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.35 ms 2026-04-04 01:25:34.354417 | orchestrator | 2026-04-04 01:25:34.354427 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-04-04 01:25:34.354435 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:25:34.354441 | orchestrator | rtt min/avg/max/mdev = 1.349/4.508/10.406/4.174 ms 2026-04-04 01:25:34.354448 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:25:34.354455 | orchestrator | + ping -c3 192.168.112.131 2026-04-04 01:25:34.363724 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2026-04-04 01:25:34.363773 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=4.64 ms 2026-04-04 01:25:35.362609 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=1.54 ms 2026-04-04 01:25:36.364415 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.46 ms 2026-04-04 01:25:36.364510 | orchestrator | 2026-04-04 01:25:36.364521 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-04 01:25:36.364530 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:25:36.364539 | orchestrator | rtt min/avg/max/mdev = 1.461/2.546/4.636/1.478 ms 2026-04-04 01:25:36.365102 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2026-04-04 01:25:37.919187 | orchestrator | 2026-04-04 01:25:37 | ERROR  | Unable to get ansible vault password 2026-04-04 01:25:37.919333 | orchestrator | 2026-04-04 01:25:37 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-04 01:25:37.919347 | orchestrator | 2026-04-04 01:25:37 | ERROR  | Dropping encrypted entries 2026-04-04 01:25:39.589667 | orchestrator | 2026-04-04 01:25:39 | INFO  | Live migrating server bc3ffe66-3feb-4ce0-8ead-f1358f250a3b 2026-04-04 01:25:52.742190 | orchestrator | 2026-04-04 01:25:52 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:55.124382 | orchestrator | 2026-04-04 01:25:55 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:57.414219 | orchestrator | 2026-04-04 01:25:57 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:25:59.825021 | orchestrator | 2026-04-04 01:25:59 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:26:02.200543 | orchestrator | 2026-04-04 01:26:02 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:26:04.421727 | orchestrator | 2026-04-04 01:26:04 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:26:06.670505 | orchestrator | 2026-04-04 01:26:06 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:26:09.015620 | orchestrator | 2026-04-04 01:26:09 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:26:11.378491 | orchestrator | 2026-04-04 01:26:11 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) completed with status ACTIVE 2026-04-04 01:26:11.378562 | orchestrator | 2026-04-04 01:26:11 | INFO  | Live migrating server ab9136b0-9cdc-439d-90aa-fd8665daef3e 2026-04-04 01:26:22.125352 | orchestrator | 2026-04-04 01:26:22 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is 
still in progress 2026-04-04 01:26:24.525490 | orchestrator | 2026-04-04 01:26:24 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:26.837569 | orchestrator | 2026-04-04 01:26:26 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:29.214554 | orchestrator | 2026-04-04 01:26:29 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:31.468558 | orchestrator | 2026-04-04 01:26:31 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:33.713226 | orchestrator | 2026-04-04 01:26:33 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:36.037309 | orchestrator | 2026-04-04 01:26:36 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:38.375448 | orchestrator | 2026-04-04 01:26:38 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:26:40.674311 | orchestrator | 2026-04-04 01:26:40 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) completed with status ACTIVE 2026-04-04 01:26:40.674370 | orchestrator | 2026-04-04 01:26:40 | INFO  | Live migrating server 1dcd1e21-dcd7-40f0-9206-29421e510440 2026-04-04 01:26:50.608468 | orchestrator | 2026-04-04 01:26:50 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:26:52.913237 | orchestrator | 2026-04-04 01:26:52 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:26:55.187581 | orchestrator | 2026-04-04 01:26:55 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:26:57.548763 | orchestrator | 2026-04-04 01:26:57 | INFO  | Live migration of 
1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:26:59.852996 | orchestrator | 2026-04-04 01:26:59 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:27:02.162804 | orchestrator | 2026-04-04 01:27:02 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:27:04.508861 | orchestrator | 2026-04-04 01:27:04 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:27:06.812642 | orchestrator | 2026-04-04 01:27:06 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:27:09.173437 | orchestrator | 2026-04-04 01:27:09 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:27:11.559891 | orchestrator | 2026-04-04 01:27:11 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) completed with status ACTIVE 2026-04-04 01:27:11.560003 | orchestrator | 2026-04-04 01:27:11 | INFO  | Live migrating server 588dea88-0e26-4241-9445-e3c230ca9c0b 2026-04-04 01:27:21.790630 | orchestrator | 2026-04-04 01:27:21 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:24.243490 | orchestrator | 2026-04-04 01:27:24 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:26.635372 | orchestrator | 2026-04-04 01:27:26 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:28.931225 | orchestrator | 2026-04-04 01:27:28 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:31.369738 | orchestrator | 2026-04-04 01:27:31 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:33.704204 | orchestrator | 
2026-04-04 01:27:33 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:36.002821 | orchestrator | 2026-04-04 01:27:36 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:38.323001 | orchestrator | 2026-04-04 01:27:38 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:40.544134 | orchestrator | 2026-04-04 01:27:40 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:42.790659 | orchestrator | 2026-04-04 01:27:42 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:27:45.113538 | orchestrator | 2026-04-04 01:27:45 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) completed with status ACTIVE 2026-04-04 01:27:45.113641 | orchestrator | 2026-04-04 01:27:45 | INFO  | Live migrating server 760733d9-4799-478b-bdbd-302dde1eb789 2026-04-04 01:27:54.668601 | orchestrator | 2026-04-04 01:27:54 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:27:56.953011 | orchestrator | 2026-04-04 01:27:56 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:27:59.308130 | orchestrator | 2026-04-04 01:27:59 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:01.661191 | orchestrator | 2026-04-04 01:28:01 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:03.937426 | orchestrator | 2026-04-04 01:28:03 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:06.164216 | orchestrator | 2026-04-04 01:28:06 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 
2026-04-04 01:28:08.546320 | orchestrator | 2026-04-04 01:28:08 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:10.824816 | orchestrator | 2026-04-04 01:28:10 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:13.120970 | orchestrator | 2026-04-04 01:28:13 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress 2026-04-04 01:28:15.361117 | orchestrator | 2026-04-04 01:28:15 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) completed with status ACTIVE 2026-04-04 01:28:15.552526 | orchestrator | + compute_list 2026-04-04 01:28:15.552597 | orchestrator | + osism manage compute list testbed-node-3 2026-04-04 01:28:17.100768 | orchestrator | 2026-04-04 01:28:17 | ERROR  | Unable to get ansible vault password 2026-04-04 01:28:17.100869 | orchestrator | 2026-04-04 01:28:17 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:28:17.100882 | orchestrator | 2026-04-04 01:28:17 | ERROR  | Dropping encrypted entries 2026-04-04 01:28:18.202216 | orchestrator | +------+--------+----------+ 2026-04-04 01:28:18.202288 | orchestrator | | ID | Name | Status | 2026-04-04 01:28:18.202295 | orchestrator | |------+--------+----------| 2026-04-04 01:28:18.202299 | orchestrator | +------+--------+----------+ 2026-04-04 01:28:18.465932 | orchestrator | + osism manage compute list testbed-node-4 2026-04-04 01:28:19.993214 | orchestrator | 2026-04-04 01:28:19 | ERROR  | Unable to get ansible vault password 2026-04-04 01:28:19.993300 | orchestrator | 2026-04-04 01:28:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:28:19.993310 | orchestrator | 2026-04-04 01:28:19 | ERROR  | Dropping encrypted entries 2026-04-04 01:28:21.603886 | orchestrator | 
+--------------------------------------+--------+----------+ 2026-04-04 01:28:21.603974 | orchestrator | | ID | Name | Status | 2026-04-04 01:28:21.603985 | orchestrator | |--------------------------------------+--------+----------| 2026-04-04 01:28:21.603993 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE | 2026-04-04 01:28:21.604000 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE | 2026-04-04 01:28:21.604015 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE | 2026-04-04 01:28:21.604022 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE | 2026-04-04 01:28:21.604029 | orchestrator | | 760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE | 2026-04-04 01:28:21.604035 | orchestrator | +--------------------------------------+--------+----------+ 2026-04-04 01:28:21.865838 | orchestrator | + osism manage compute list testbed-node-5 2026-04-04 01:28:23.436597 | orchestrator | 2026-04-04 01:28:23 | ERROR  | Unable to get ansible vault password 2026-04-04 01:28:23.436654 | orchestrator | 2026-04-04 01:28:23 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-04 01:28:23.436664 | orchestrator | 2026-04-04 01:28:23 | ERROR  | Dropping encrypted entries 2026-04-04 01:28:24.487151 | orchestrator | +------+--------+----------+ 2026-04-04 01:28:24.487209 | orchestrator | | ID | Name | Status | 2026-04-04 01:28:24.487217 | orchestrator | |------+--------+----------| 2026-04-04 01:28:24.487224 | orchestrator | +------+--------+----------+ 2026-04-04 01:28:24.757697 | orchestrator | + server_ping 2026-04-04 01:28:24.759044 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-04 01:28:24.759120 | orchestrator | ++ tr -d '\r' 2026-04-04 01:28:27.614336 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f 
value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:28:27.614386 | orchestrator | + ping -c3 192.168.112.118 2026-04-04 01:28:27.622472 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data. 2026-04-04 01:28:27.622519 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=5.08 ms 2026-04-04 01:28:28.620663 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=1.72 ms 2026-04-04 01:28:29.623119 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.98 ms 2026-04-04 01:28:29.623191 | orchestrator | 2026-04-04 01:28:29.623198 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-04 01:28:29.623203 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:28:29.623208 | orchestrator | rtt min/avg/max/mdev = 1.719/2.927/5.082/1.527 ms 2026-04-04 01:28:29.623703 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:28:29.623770 | orchestrator | + ping -c3 192.168.112.179 2026-04-04 01:28:29.633609 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2026-04-04 01:28:29.633697 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=5.73 ms 2026-04-04 01:28:30.631344 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.14 ms 2026-04-04 01:28:31.633344 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.99 ms 2026-04-04 01:28:31.633410 | orchestrator | 2026-04-04 01:28:31.633417 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-04 01:28:31.633423 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-04 01:28:31.633428 | orchestrator | rtt min/avg/max/mdev = 1.990/3.286/5.729/1.728 ms 2026-04-04 01:28:31.633433 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:28:31.633437 | orchestrator | + ping -c3 192.168.112.127 2026-04-04 01:28:31.646194 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2026-04-04 01:28:31.646265 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.80 ms 2026-04-04 01:28:32.641738 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=1.42 ms 2026-04-04 01:28:33.643831 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.61 ms 2026-04-04 01:28:33.643909 | orchestrator | 2026-04-04 01:28:33.643921 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-04-04 01:28:33.643929 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:28:33.643936 | orchestrator | rtt min/avg/max/mdev = 1.419/3.608/7.795/2.961 ms 2026-04-04 01:28:33.644544 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:28:33.644562 | orchestrator | + ping -c3 192.168.112.161 2026-04-04 01:28:33.653791 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
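The `server_ping` step traced above (`+ server_ping`, then a `for address in $(openstack … floating ip list …)` loop with `ping -c3`) can be sketched as a small shell function. This is a reconstruction from the trace, not the testbed's actual script: the cloud name `test`, the `--status ACTIVE` filter, and the `tr -d '\r'` cleanup are taken verbatim from the log; the warning message on failure is an addition.

```shell
#!/usr/bin/env bash
# Sketch of the server_ping helper seen in the trace above:
# list all ACTIVE floating IPs of the "test" cloud and send
# three ICMP probes to each one, as the log shows (ping -c3).
server_ping() {
    local address
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        # hypothetical warning line; the traced script has no error handling
        ping -c3 "$address" || echo "WARNING: $address unreachable"
    done
}
```

Note that the unquoted `$(…)` word-splitting is intentional here: the command prints one address per line, and `tr -d '\r'` strips carriage returns so each token is a clean IPv4 address.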
2026-04-04 01:28:33.653846 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=5.37 ms 2026-04-04 01:28:34.651891 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=1.51 ms 2026-04-04 01:28:35.654707 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.98 ms 2026-04-04 01:28:35.654801 | orchestrator | 2026-04-04 01:28:35.654812 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-04-04 01:28:35.654822 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:28:35.654829 | orchestrator | rtt min/avg/max/mdev = 1.509/2.953/5.373/1.721 ms 2026-04-04 01:28:35.654837 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:28:35.654845 | orchestrator | + ping -c3 192.168.112.131 2026-04-04 01:28:35.670267 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2026-04-04 01:28:35.670363 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=10.8 ms 2026-04-04 01:28:36.663938 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.35 ms 2026-04-04 01:28:37.664748 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.63 ms 2026-04-04 01:28:37.664833 | orchestrator | 2026-04-04 01:28:37.664841 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-04 01:28:37.664878 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:28:37.664882 | orchestrator | rtt min/avg/max/mdev = 1.627/4.932/10.818/4.172 ms 2026-04-04 01:28:37.665197 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2026-04-04 01:28:39.226868 | orchestrator | 2026-04-04 01:28:39 | ERROR  | Unable to get ansible vault password 2026-04-04 01:28:39.227645 | orchestrator | 2026-04-04 01:28:39 | ERROR  | Unable to get vault secret: [Errno 2] No such file 
or directory: '/share/ansible_vault_password.key' 2026-04-04 01:28:39.227681 | orchestrator | 2026-04-04 01:28:39 | ERROR  | Dropping encrypted entries 2026-04-04 01:28:40.787619 | orchestrator | 2026-04-04 01:28:40 | INFO  | Live migrating server bc3ffe66-3feb-4ce0-8ead-f1358f250a3b 2026-04-04 01:28:51.394490 | orchestrator | 2026-04-04 01:28:51 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:28:53.698223 | orchestrator | 2026-04-04 01:28:53 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:28:56.068104 | orchestrator | 2026-04-04 01:28:56 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:28:58.450931 | orchestrator | 2026-04-04 01:28:58 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:29:00.871342 | orchestrator | 2026-04-04 01:29:00 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:29:03.195902 | orchestrator | 2026-04-04 01:29:03 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:29:05.502531 | orchestrator | 2026-04-04 01:29:05 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:29:07.891908 | orchestrator | 2026-04-04 01:29:07 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) is still in progress 2026-04-04 01:29:10.198665 | orchestrator | 2026-04-04 01:29:10 | INFO  | Live migration of bc3ffe66-3feb-4ce0-8ead-f1358f250a3b (test-3) completed with status ACTIVE 2026-04-04 01:29:10.198815 | orchestrator | 2026-04-04 01:29:10 | INFO  | Live migrating server ab9136b0-9cdc-439d-90aa-fd8665daef3e 2026-04-04 01:29:20.345966 | orchestrator | 2026-04-04 01:29:20 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is 
still in progress 2026-04-04 01:29:22.607652 | orchestrator | 2026-04-04 01:29:22 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:25.106433 | orchestrator | 2026-04-04 01:29:25 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:27.327976 | orchestrator | 2026-04-04 01:29:27 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:29.549638 | orchestrator | 2026-04-04 01:29:29 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:31.915945 | orchestrator | 2026-04-04 01:29:31 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:34.286245 | orchestrator | 2026-04-04 01:29:34 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:36.503949 | orchestrator | 2026-04-04 01:29:36 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) is still in progress 2026-04-04 01:29:38.705225 | orchestrator | 2026-04-04 01:29:38 | INFO  | Live migration of ab9136b0-9cdc-439d-90aa-fd8665daef3e (test-4) completed with status ACTIVE 2026-04-04 01:29:38.705345 | orchestrator | 2026-04-04 01:29:38 | INFO  | Live migrating server 1dcd1e21-dcd7-40f0-9206-29421e510440 2026-04-04 01:29:48.240567 | orchestrator | 2026-04-04 01:29:48 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:29:50.656948 | orchestrator | 2026-04-04 01:29:50 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:29:53.034371 | orchestrator | 2026-04-04 01:29:53 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:29:55.425656 | orchestrator | 2026-04-04 01:29:55 | INFO  | Live migration of 
1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:29:57.805006 | orchestrator | 2026-04-04 01:29:57 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:30:00.142735 | orchestrator | 2026-04-04 01:30:00 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:30:02.444050 | orchestrator | 2026-04-04 01:30:02 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:30:04.648340 | orchestrator | 2026-04-04 01:30:04 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) is still in progress 2026-04-04 01:30:06.957241 | orchestrator | 2026-04-04 01:30:06 | INFO  | Live migration of 1dcd1e21-dcd7-40f0-9206-29421e510440 (test-2) completed with status ACTIVE 2026-04-04 01:30:06.957299 | orchestrator | 2026-04-04 01:30:06 | INFO  | Live migrating server 588dea88-0e26-4241-9445-e3c230ca9c0b 2026-04-04 01:30:17.009085 | orchestrator | 2026-04-04 01:30:17 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:19.297030 | orchestrator | 2026-04-04 01:30:19 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:21.714828 | orchestrator | 2026-04-04 01:30:21 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:24.337775 | orchestrator | 2026-04-04 01:30:24 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:26.693220 | orchestrator | 2026-04-04 01:30:26 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:28.972269 | orchestrator | 2026-04-04 01:30:28 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress 2026-04-04 01:30:31.264215 | orchestrator | 2026-04-04 
01:30:31 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress
2026-04-04 01:30:33.573769 | orchestrator | 2026-04-04 01:30:33 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress
2026-04-04 01:30:35.916321 | orchestrator | 2026-04-04 01:30:35 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress
2026-04-04 01:30:38.274153 | orchestrator | 2026-04-04 01:30:38 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) is still in progress
2026-04-04 01:30:40.481232 | orchestrator | 2026-04-04 01:30:40 | INFO  | Live migration of 588dea88-0e26-4241-9445-e3c230ca9c0b (test) completed with status ACTIVE
2026-04-04 01:30:40.481281 | orchestrator | 2026-04-04 01:30:40 | INFO  | Live migrating server 760733d9-4799-478b-bdbd-302dde1eb789
2026-04-04 01:30:51.069967 | orchestrator | 2026-04-04 01:30:51 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:30:53.422207 | orchestrator | 2026-04-04 01:30:53 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:30:55.783096 | orchestrator | 2026-04-04 01:30:55 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:30:58.130411 | orchestrator | 2026-04-04 01:30:58 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:00.522738 | orchestrator | 2026-04-04 01:31:00 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:02.749282 | orchestrator | 2026-04-04 01:31:02 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:05.047335 | orchestrator | 2026-04-04 01:31:05 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:07.447345 | orchestrator | 2026-04-04 01:31:07 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:09.736003 | orchestrator | 2026-04-04 01:31:09 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) is still in progress
2026-04-04 01:31:12.023616 | orchestrator | 2026-04-04 01:31:12 | INFO  | Live migration of 760733d9-4799-478b-bdbd-302dde1eb789 (test-1) completed with status ACTIVE
2026-04-04 01:31:12.308779 | orchestrator | + compute_list
2026-04-04 01:31:12.308824 | orchestrator | + osism manage compute list testbed-node-3
2026-04-04 01:31:13.750010 | orchestrator | 2026-04-04 01:31:13 | ERROR  | Unable to get ansible vault password
2026-04-04 01:31:13.750166 | orchestrator | 2026-04-04 01:31:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:31:13.750181 | orchestrator | 2026-04-04 01:31:13 | ERROR  | Dropping encrypted entries
2026-04-04 01:31:14.909295 | orchestrator | +------+--------+----------+
2026-04-04 01:31:14.909377 | orchestrator | | ID | Name | Status |
2026-04-04 01:31:14.909383 | orchestrator | |------+--------+----------|
2026-04-04 01:31:14.909387 | orchestrator | +------+--------+----------+
2026-04-04 01:31:15.192828 | orchestrator | + osism manage compute list testbed-node-4
2026-04-04 01:31:16.648000 | orchestrator | 2026-04-04 01:31:16 | ERROR  | Unable to get ansible vault password
2026-04-04 01:31:16.648094 | orchestrator | 2026-04-04 01:31:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:31:16.648107 | orchestrator | 2026-04-04 01:31:16 | ERROR  | Dropping encrypted entries
2026-04-04 01:31:17.769310 | orchestrator | +------+--------+----------+
2026-04-04 01:31:17.769395 | orchestrator | | ID | Name | Status |
2026-04-04 01:31:17.769402 | orchestrator | |------+--------+----------|
2026-04-04 01:31:17.769406 | orchestrator | +------+--------+----------+
2026-04-04 01:31:18.048843 | orchestrator | + osism manage compute list testbed-node-5
2026-04-04 01:31:19.566180 | orchestrator | 2026-04-04 01:31:19 | ERROR  | Unable to get ansible vault password
2026-04-04 01:31:19.566240 | orchestrator | 2026-04-04 01:31:19 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-04 01:31:19.566251 | orchestrator | 2026-04-04 01:31:19 | ERROR  | Dropping encrypted entries
2026-04-04 01:31:21.115144 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:31:21.115236 | orchestrator | | ID | Name | Status |
2026-04-04 01:31:21.115244 | orchestrator | |--------------------------------------+--------+----------|
2026-04-04 01:31:21.115250 | orchestrator | | bc3ffe66-3feb-4ce0-8ead-f1358f250a3b | test-3 | ACTIVE |
2026-04-04 01:31:21.115257 | orchestrator | | ab9136b0-9cdc-439d-90aa-fd8665daef3e | test-4 | ACTIVE |
2026-04-04 01:31:21.115290 | orchestrator | | 1dcd1e21-dcd7-40f0-9206-29421e510440 | test-2 | ACTIVE |
2026-04-04 01:31:21.115297 | orchestrator | | 588dea88-0e26-4241-9445-e3c230ca9c0b | test | ACTIVE |
2026-04-04 01:31:21.115304 | orchestrator | | 760733d9-4799-478b-bdbd-302dde1eb789 | test-1 | ACTIVE |
2026-04-04 01:31:21.115310 | orchestrator | +--------------------------------------+--------+----------+
2026-04-04 01:31:21.382769 | orchestrator | + server_ping
2026-04-04 01:31:21.383847 | orchestrator | ++ tr -d '\r'
2026-04-04 01:31:21.383915 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-04-04 01:31:24.182307 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-04-04 01:31:24.182399 | orchestrator | + ping -c3 192.168.112.118
2026-04-04 01:31:24.194852 | orchestrator | PING 192.168.112.118
(192.168.112.118) 56(84) bytes of data. 2026-04-04 01:31:24.195034 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=10.0 ms 2026-04-04 01:31:25.187709 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=1.63 ms 2026-04-04 01:31:26.190397 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.51 ms 2026-04-04 01:31:26.190454 | orchestrator | 2026-04-04 01:31:26.190461 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-04 01:31:26.190466 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:31:26.190471 | orchestrator | rtt min/avg/max/mdev = 1.505/4.386/10.022/3.985 ms 2026-04-04 01:31:26.190479 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:31:26.190486 | orchestrator | + ping -c3 192.168.112.179 2026-04-04 01:31:26.196177 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data. 
2026-04-04 01:31:26.196224 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=4.07 ms 2026-04-04 01:31:27.194607 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=1.32 ms 2026-04-04 01:31:28.195754 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.17 ms 2026-04-04 01:31:28.196270 | orchestrator | 2026-04-04 01:31:28.196295 | orchestrator | --- 192.168.112.179 ping statistics --- 2026-04-04 01:31:28.196305 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-04 01:31:28.196310 | orchestrator | rtt min/avg/max/mdev = 1.170/2.186/4.073/1.335 ms 2026-04-04 01:31:28.196375 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:31:28.196383 | orchestrator | + ping -c3 192.168.112.127 2026-04-04 01:31:28.208763 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data. 2026-04-04 01:31:28.208822 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=7.56 ms 2026-04-04 01:31:29.205901 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=1.97 ms 2026-04-04 01:31:30.206684 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.48 ms 2026-04-04 01:31:30.206785 | orchestrator | 2026-04-04 01:31:30.206796 | orchestrator | --- 192.168.112.127 ping statistics --- 2026-04-04 01:31:30.206805 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2026-04-04 01:31:30.206813 | orchestrator | rtt min/avg/max/mdev = 1.475/3.665/7.556/2.758 ms 2026-04-04 01:31:30.207082 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:31:30.207100 | orchestrator | + ping -c3 192.168.112.161 2026-04-04 01:31:30.220829 | orchestrator | PING 192.168.112.161 (192.168.112.161) 56(84) bytes of data. 
2026-04-04 01:31:30.220966 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=1 ttl=63 time=8.92 ms 2026-04-04 01:31:31.215254 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=2 ttl=63 time=2.10 ms 2026-04-04 01:31:32.216774 | orchestrator | 64 bytes from 192.168.112.161: icmp_seq=3 ttl=63 time=1.68 ms 2026-04-04 01:31:32.216923 | orchestrator | 2026-04-04 01:31:32.216935 | orchestrator | --- 192.168.112.161 ping statistics --- 2026-04-04 01:31:32.216957 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-04 01:31:32.216962 | orchestrator | rtt min/avg/max/mdev = 1.679/4.234/8.921/3.318 ms 2026-04-04 01:31:32.216967 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-04 01:31:32.216991 | orchestrator | + ping -c3 192.168.112.131 2026-04-04 01:31:32.227433 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 2026-04-04 01:31:32.227522 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=7.30 ms 2026-04-04 01:31:33.223723 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.07 ms 2026-04-04 01:31:34.225512 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.90 ms 2026-04-04 01:31:34.225627 | orchestrator | 2026-04-04 01:31:34.225638 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-04 01:31:34.225647 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-04 01:31:34.225654 | orchestrator | rtt min/avg/max/mdev = 1.904/3.758/7.299/2.504 ms 2026-04-04 01:31:34.390655 | orchestrator | ok: Runtime: 0:18:37.695926 2026-04-04 01:31:34.445598 | 2026-04-04 01:31:34.446129 | TASK [Run tempest] 2026-04-04 01:31:35.196895 | orchestrator | + set -e 2026-04-04 01:31:35.197066 | orchestrator | + source /opt/manager-vars.sh 2026-04-04 01:31:35.197088 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-04 
01:31:35.197096 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-04 01:31:35.197104 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-04 01:31:35.197111 | orchestrator | ++ CEPH_VERSION=reef 2026-04-04 01:31:35.197120 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-04 01:31:35.197150 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-04 01:31:35.197163 | orchestrator | ++ export MANAGER_VERSION=latest 2026-04-04 01:31:35.197172 | orchestrator | ++ MANAGER_VERSION=latest 2026-04-04 01:31:35.197177 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-04 01:31:35.197184 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-04 01:31:35.197188 | orchestrator | ++ export ARA=false 2026-04-04 01:31:35.197193 | orchestrator | ++ ARA=false 2026-04-04 01:31:35.197199 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-04 01:31:35.197203 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-04 01:31:35.197207 | orchestrator | ++ export TEMPEST=true 2026-04-04 01:31:35.197215 | orchestrator | ++ TEMPEST=true 2026-04-04 01:31:35.197219 | orchestrator | ++ export IS_ZUUL=true 2026-04-04 01:31:35.197223 | orchestrator | ++ IS_ZUUL=true 2026-04-04 01:31:35.197228 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:31:35.197232 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.76 2026-04-04 01:31:35.197237 | orchestrator | ++ export EXTERNAL_API=false 2026-04-04 01:31:35.197241 | orchestrator | ++ EXTERNAL_API=false 2026-04-04 01:31:35.197245 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-04 01:31:35.197249 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-04 01:31:35.197252 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-04 01:31:35.197256 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-04 01:31:35.197260 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-04 01:31:35.197264 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-04 01:31:35.197269 | orchestrator | + echo 2026-04-04 01:31:35.197273 | 
orchestrator | 2026-04-04 01:31:35.197277 | orchestrator | # Tempest 2026-04-04 01:31:35.197281 | orchestrator | 2026-04-04 01:31:35.197285 | orchestrator | + echo '# Tempest' 2026-04-04 01:31:35.197289 | orchestrator | + echo 2026-04-04 01:31:35.197293 | orchestrator | + [[ ! -e /opt/tempest ]] 2026-04-04 01:31:35.198484 | orchestrator | + osism apply tempest --skip-tags run-tempest 2026-04-04 01:31:36.499019 | orchestrator | 2026-04-04 01:31:36 | INFO  | Prepare task for execution of tempest. 2026-04-04 01:31:36.564915 | orchestrator | 2026-04-04 01:31:36 | INFO  | Task a50f7c90-f04e-47e4-8c1f-b528885d54db (tempest) was prepared for execution. 2026-04-04 01:31:36.565015 | orchestrator | 2026-04-04 01:31:36 | INFO  | It takes a moment until task a50f7c90-f04e-47e4-8c1f-b528885d54db (tempest) has been started and output is visible here. 2026-04-04 01:32:50.364965 | orchestrator | 2026-04-04 01:32:50.365036 | orchestrator | PLAY [Run tempest] ************************************************************* 2026-04-04 01:32:50.365044 | orchestrator | 2026-04-04 01:32:50.365048 | orchestrator | TASK [osism.validations.tempest : Create tempest workdir] ********************** 2026-04-04 01:32:50.365062 | orchestrator | Saturday 04 April 2026 01:31:39 +0000 (0:00:00.300) 0:00:00.300 ******** 2026-04-04 01:32:50.365069 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365076 | orchestrator | 2026-04-04 01:32:50.365083 | orchestrator | TASK [osism.validations.tempest : Copy tempest wrapper script] ***************** 2026-04-04 01:32:50.365089 | orchestrator | Saturday 04 April 2026 01:31:40 +0000 (0:00:01.054) 0:00:01.355 ******** 2026-04-04 01:32:50.365096 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365103 | orchestrator | 2026-04-04 01:32:50.365109 | orchestrator | TASK [osism.validations.tempest : Check for existing tempest initialisation] *** 2026-04-04 01:32:50.365115 | orchestrator | Saturday 04 April 2026 01:31:42 +0000 (0:00:01.200) 
0:00:02.556 ******** 2026-04-04 01:32:50.365122 | orchestrator | ok: [testbed-manager] 2026-04-04 01:32:50.365129 | orchestrator | 2026-04-04 01:32:50.365135 | orchestrator | TASK [osism.validations.tempest : Init tempest] ******************************** 2026-04-04 01:32:50.365140 | orchestrator | Saturday 04 April 2026 01:31:42 +0000 (0:00:00.423) 0:00:02.979 ******** 2026-04-04 01:32:50.365147 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365153 | orchestrator | 2026-04-04 01:32:50.365164 | orchestrator | TASK [osism.validations.tempest : Resolve image IDs] *************************** 2026-04-04 01:32:50.365171 | orchestrator | Saturday 04 April 2026 01:32:02 +0000 (0:00:20.444) 0:00:23.423 ******** 2026-04-04 01:32:50.365195 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.3) 2026-04-04 01:32:50.365201 | orchestrator | ok: [testbed-manager -> localhost] => (item=Cirros 0.6.2) 2026-04-04 01:32:50.365210 | orchestrator | 2026-04-04 01:32:50.365215 | orchestrator | TASK [osism.validations.tempest : Assert images have been resolved] ************ 2026-04-04 01:32:50.365221 | orchestrator | Saturday 04 April 2026 01:32:11 +0000 (0:00:08.191) 0:00:31.614 ******** 2026-04-04 01:32:50.365227 | orchestrator | ok: [testbed-manager] => { 2026-04-04 01:32:50.365234 | orchestrator |  "changed": false, 2026-04-04 01:32:50.365240 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:32:50.365246 | orchestrator | } 2026-04-04 01:32:50.365252 | orchestrator | 2026-04-04 01:32:50.365259 | orchestrator | TASK [osism.validations.tempest : Get auth token] ****************************** 2026-04-04 01:32:50.365264 | orchestrator | Saturday 04 April 2026 01:32:11 +0000 (0:00:00.142) 0:00:31.757 ******** 2026-04-04 01:32:50.365271 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365277 | orchestrator | 2026-04-04 01:32:50.365283 | orchestrator | TASK [osism.validations.tempest : Get endpoint catalog] 
************************ 2026-04-04 01:32:50.365290 | orchestrator | Saturday 04 April 2026 01:32:14 +0000 (0:00:03.549) 0:00:35.306 ******** 2026-04-04 01:32:50.365297 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365303 | orchestrator | 2026-04-04 01:32:50.365309 | orchestrator | TASK [osism.validations.tempest : Get service catalog] ************************* 2026-04-04 01:32:50.365315 | orchestrator | Saturday 04 April 2026 01:32:16 +0000 (0:00:01.795) 0:00:37.102 ******** 2026-04-04 01:32:50.365321 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365328 | orchestrator | 2026-04-04 01:32:50.365335 | orchestrator | TASK [osism.validations.tempest : Register img_file name] ********************** 2026-04-04 01:32:50.365343 | orchestrator | Saturday 04 April 2026 01:32:20 +0000 (0:00:03.793) 0:00:40.895 ******** 2026-04-04 01:32:50.365350 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365357 | orchestrator | 2026-04-04 01:32:50.365364 | orchestrator | TASK [osism.validations.tempest : Download img_file from image_ref] ************ 2026-04-04 01:32:50.365370 | orchestrator | Saturday 04 April 2026 01:32:20 +0000 (0:00:00.211) 0:00:41.107 ******** 2026-04-04 01:32:50.365377 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365383 | orchestrator | 2026-04-04 01:32:50.365391 | orchestrator | TASK [osism.validations.tempest : Install qemu-utils package] ****************** 2026-04-04 01:32:50.365397 | orchestrator | Saturday 04 April 2026 01:32:22 +0000 (0:00:02.065) 0:00:43.173 ******** 2026-04-04 01:32:50.365403 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365410 | orchestrator | 2026-04-04 01:32:50.365416 | orchestrator | TASK [osism.validations.tempest : Convert img_file to qcow2 format] ************ 2026-04-04 01:32:50.365422 | orchestrator | Saturday 04 April 2026 01:32:30 +0000 (0:00:08.260) 0:00:51.433 ******** 2026-04-04 01:32:50.365428 | orchestrator | 
changed: [testbed-manager] 2026-04-04 01:32:50.365435 | orchestrator | 2026-04-04 01:32:50.365442 | orchestrator | TASK [osism.validations.tempest : Get network API extensions] ****************** 2026-04-04 01:32:50.365449 | orchestrator | Saturday 04 April 2026 01:32:31 +0000 (0:00:00.676) 0:00:52.110 ******** 2026-04-04 01:32:50.365455 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365462 | orchestrator | 2026-04-04 01:32:50.365469 | orchestrator | TASK [osism.validations.tempest : Revoke token] ******************************** 2026-04-04 01:32:50.365476 | orchestrator | Saturday 04 April 2026 01:32:33 +0000 (0:00:01.493) 0:00:53.603 ******** 2026-04-04 01:32:50.365482 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365489 | orchestrator | 2026-04-04 01:32:50.365496 | orchestrator | TASK [osism.validations.tempest : Set fact for config option api_extensions] *** 2026-04-04 01:32:50.365502 | orchestrator | Saturday 04 April 2026 01:32:34 +0000 (0:00:01.526) 0:00:55.130 ******** 2026-04-04 01:32:50.365508 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365514 | orchestrator | 2026-04-04 01:32:50.365521 | orchestrator | TASK [osism.validations.tempest : Set fact for config option img_file] ********* 2026-04-04 01:32:50.365535 | orchestrator | Saturday 04 April 2026 01:32:34 +0000 (0:00:00.180) 0:00:55.310 ******** 2026-04-04 01:32:50.365542 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365548 | orchestrator | 2026-04-04 01:32:50.365560 | orchestrator | TASK [osism.validations.tempest : Resolve floating network ID] ***************** 2026-04-04 01:32:50.365566 | orchestrator | Saturday 04 April 2026 01:32:35 +0000 (0:00:00.364) 0:00:55.675 ******** 2026-04-04 01:32:50.365573 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-04 01:32:50.365580 | orchestrator | 2026-04-04 01:32:50.365586 | orchestrator | TASK [osism.validations.tempest : Assert floating network 
id has been resolved] *** 2026-04-04 01:32:50.365605 | orchestrator | Saturday 04 April 2026 01:32:38 +0000 (0:00:03.747) 0:00:59.422 ******** 2026-04-04 01:32:50.365614 | orchestrator | ok: [testbed-manager -> localhost] => { 2026-04-04 01:32:50.365621 | orchestrator |  "changed": false, 2026-04-04 01:32:50.365628 | orchestrator |  "msg": "All assertions passed" 2026-04-04 01:32:50.365634 | orchestrator | } 2026-04-04 01:32:50.365641 | orchestrator | 2026-04-04 01:32:50.365649 | orchestrator | TASK [osism.validations.tempest : Resolve flavor IDs] ************************** 2026-04-04 01:32:50.365655 | orchestrator | Saturday 04 April 2026 01:32:39 +0000 (0:00:00.189) 0:00:59.611 ******** 2026-04-04 01:32:50.365662 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1})  2026-04-04 01:32:50.365669 | orchestrator | skipping: [testbed-manager] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2})  2026-04-04 01:32:50.365675 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:32:50.365682 | orchestrator | 2026-04-04 01:32:50.365689 | orchestrator | TASK [osism.validations.tempest : Assert flavors have been resolved] *********** 2026-04-04 01:32:50.365696 | orchestrator | Saturday 04 April 2026 01:32:39 +0000 (0:00:00.184) 0:00:59.796 ******** 2026-04-04 01:32:50.365702 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:32:50.365709 | orchestrator | 2026-04-04 01:32:50.365715 | orchestrator | TASK [osism.validations.tempest : Get stats of exclude list] ******************* 2026-04-04 01:32:50.365722 | orchestrator | Saturday 04 April 2026 01:32:39 +0000 (0:00:00.144) 0:00:59.940 ******** 2026-04-04 01:32:50.365729 | orchestrator | ok: [testbed-manager] 2026-04-04 01:32:50.365735 | orchestrator | 2026-04-04 01:32:50.365741 | orchestrator | TASK [osism.validations.tempest : Copy exclude list] *************************** 2026-04-04 01:32:50.365749 | orchestrator | Saturday 04 April 2026 
01:32:39 +0000 (0:00:00.453) 0:01:00.394 ******** 2026-04-04 01:32:50.365756 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365776 | orchestrator | 2026-04-04 01:32:50.365783 | orchestrator | TASK [osism.validations.tempest : Get stats of include list] ******************* 2026-04-04 01:32:50.365789 | orchestrator | Saturday 04 April 2026 01:32:40 +0000 (0:00:00.878) 0:01:01.272 ******** 2026-04-04 01:32:50.365796 | orchestrator | ok: [testbed-manager] 2026-04-04 01:32:50.365803 | orchestrator | 2026-04-04 01:32:50.365809 | orchestrator | TASK [osism.validations.tempest : Copy include list] *************************** 2026-04-04 01:32:50.365816 | orchestrator | Saturday 04 April 2026 01:32:41 +0000 (0:00:00.409) 0:01:01.682 ******** 2026-04-04 01:32:50.365822 | orchestrator | skipping: [testbed-manager] 2026-04-04 01:32:50.365829 | orchestrator | 2026-04-04 01:32:50.365835 | orchestrator | TASK [osism.validations.tempest : Create tempest flavors] ********************** 2026-04-04 01:32:50.365840 | orchestrator | Saturday 04 April 2026 01:32:41 +0000 (0:00:00.310) 0:01:01.992 ******** 2026-04-04 01:32:50.365843 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-1', 'vcpus': 1, 'ram': 1024, 'disk': 1}) 2026-04-04 01:32:50.365847 | orchestrator | changed: [testbed-manager -> localhost] => (item={'name': 'tempest-2', 'vcpus': 2, 'ram': 2048, 'disk': 2}) 2026-04-04 01:32:50.365851 | orchestrator | 2026-04-04 01:32:50.365855 | orchestrator | TASK [osism.validations.tempest : Copy tempest.conf file] ********************** 2026-04-04 01:32:50.365859 | orchestrator | Saturday 04 April 2026 01:32:49 +0000 (0:00:07.820) 0:01:09.813 ******** 2026-04-04 01:32:50.365863 | orchestrator | changed: [testbed-manager] 2026-04-04 01:32:50.365866 | orchestrator | 2026-04-04 01:32:50.365875 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-04 01:32:50.365879 | orchestrator | 
testbed-manager : ok=24  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-04 01:32:50.365884 | orchestrator |
2026-04-04 01:32:50.365887 | orchestrator |
2026-04-04 01:32:50.365891 | orchestrator | TASKS RECAP ********************************************************************
2026-04-04 01:32:50.365895 | orchestrator | Saturday 04 April 2026 01:32:50 +0000 (0:00:00.985) 0:01:10.799 ********
2026-04-04 01:32:50.365899 | orchestrator | ===============================================================================
2026-04-04 01:32:50.365902 | orchestrator | osism.validations.tempest : Init tempest ------------------------------- 20.44s
2026-04-04 01:32:50.365906 | orchestrator | osism.validations.tempest : Install qemu-utils package ------------------ 8.26s
2026-04-04 01:32:50.365910 | orchestrator | osism.validations.tempest : Resolve image IDs --------------------------- 8.19s
2026-04-04 01:32:50.365913 | orchestrator | osism.validations.tempest : Create tempest flavors ---------------------- 7.82s
2026-04-04 01:32:50.365920 | orchestrator | osism.validations.tempest : Get service catalog ------------------------- 3.79s
2026-04-04 01:32:50.365924 | orchestrator | osism.validations.tempest : Resolve floating network ID ----------------- 3.75s
2026-04-04 01:32:50.365928 | orchestrator | osism.validations.tempest : Get auth token ------------------------------ 3.55s
2026-04-04 01:32:50.365932 | orchestrator | osism.validations.tempest : Download img_file from image_ref ------------ 2.07s
2026-04-04 01:32:50.365936 | orchestrator | osism.validations.tempest : Get endpoint catalog ------------------------ 1.80s
2026-04-04 01:32:50.365939 | orchestrator | osism.validations.tempest : Revoke token -------------------------------- 1.53s
2026-04-04 01:32:50.365943 | orchestrator | osism.validations.tempest : Get network API extensions ------------------ 1.49s
2026-04-04 01:32:50.365947 | orchestrator | osism.validations.tempest : Copy tempest wrapper script ----------------- 1.20s
2026-04-04 01:32:50.365951 | orchestrator | osism.validations.tempest : Create tempest workdir ---------------------- 1.05s
2026-04-04 01:32:50.365954 | orchestrator | osism.validations.tempest : Copy tempest.conf file ---------------------- 0.99s
2026-04-04 01:32:50.365958 | orchestrator | osism.validations.tempest : Copy exclude list --------------------------- 0.88s
2026-04-04 01:32:50.365962 | orchestrator | osism.validations.tempest : Convert img_file to qcow2 format ------------ 0.68s
2026-04-04 01:32:50.365966 | orchestrator | osism.validations.tempest : Get stats of exclude list ------------------- 0.45s
2026-04-04 01:32:50.365974 | orchestrator | osism.validations.tempest : Check for existing tempest initialisation --- 0.42s
2026-04-04 01:32:50.620936 | orchestrator | osism.validations.tempest : Get stats of include list ------------------- 0.41s
2026-04-04 01:32:50.621011 | orchestrator | osism.validations.tempest : Set fact for config option img_file --------- 0.36s
2026-04-04 01:32:50.809694 | orchestrator | + sed -i '/log_dir =/d' /opt/tempest/etc/tempest.conf
2026-04-04 01:32:50.813037 | orchestrator | + sed -i '/log_file =/d' /opt/tempest/etc/tempest.conf
2026-04-04 01:32:50.816112 | orchestrator |
2026-04-04 01:32:50.816162 | orchestrator | ## IDENTITY (API)
2026-04-04 01:32:50.816168 | orchestrator |
2026-04-04 01:32:50.816173 | orchestrator | + [[ false == \t\r\u\e ]]
2026-04-04 01:32:50.816178 | orchestrator | + echo
2026-04-04 01:32:50.816183 | orchestrator | + echo '## IDENTITY (API)'
2026-04-04 01:32:50.816188 | orchestrator | + echo
2026-04-04 01:32:50.816193 | orchestrator | + _tempest tempest.api.identity.v3
2026-04-04 01:32:50.816199 | orchestrator | + local regex=tempest.api.identity.v3
2026-04-04 01:32:50.817077 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest
registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16 2026-04-04 01:32:50.817660 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:32:50.819542 | orchestrator | + tee -a /opt/tempest/20260404-0132.log 2026-04-04 01:32:54.506084 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.identity.v3 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:32:54.506168 | orchestrator | Did you mean one of these? 2026-04-04 01:32:54.506180 | orchestrator | help 2026-04-04 01:32:54.506189 | orchestrator | init 2026-04-04 01:32:54.846516 | orchestrator | 2026-04-04 01:32:54.846573 | orchestrator | ## IMAGE (API) 2026-04-04 01:32:54.846579 | orchestrator | 2026-04-04 01:32:54.846583 | orchestrator | + echo 2026-04-04 01:32:54.846587 | orchestrator | + echo '## IMAGE (API)' 2026-04-04 01:32:54.846592 | orchestrator | + echo 2026-04-04 01:32:54.846596 | orchestrator | + _tempest tempest.api.image.v2 2026-04-04 01:32:54.846601 | orchestrator | + local regex=tempest.api.image.v2 2026-04-04 01:32:54.847621 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16 2026-04-04 01:32:54.847674 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:32:54.850070 | orchestrator | + tee -a /opt/tempest/20260404-0132.log 2026-04-04 01:32:58.328898 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.image.v2 --concurrency 16' is not a tempest command. 
See 'tempest --help'. 2026-04-04 01:32:58.328956 | orchestrator | Did you mean one of these? 2026-04-04 01:32:58.328963 | orchestrator | help 2026-04-04 01:32:58.328968 | orchestrator | init 2026-04-04 01:32:58.716037 | orchestrator | 2026-04-04 01:32:58.716098 | orchestrator | ## NETWORK (API) 2026-04-04 01:32:58.716104 | orchestrator | 2026-04-04 01:32:58.716109 | orchestrator | + echo 2026-04-04 01:32:58.716114 | orchestrator | + echo '## NETWORK (API)' 2026-04-04 01:32:58.716119 | orchestrator | + echo 2026-04-04 01:32:58.716123 | orchestrator | + _tempest tempest.api.network 2026-04-04 01:32:58.716130 | orchestrator | + local regex=tempest.api.network 2026-04-04 01:32:58.716936 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16 2026-04-04 01:32:58.718102 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:32:58.720668 | orchestrator | + tee -a /opt/tempest/20260404-0132.log 2026-04-04 01:33:02.216550 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.network --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:33:02.216653 | orchestrator | Did you mean one of these? 
2026-04-04 01:33:02.216665 | orchestrator | help 2026-04-04 01:33:02.216672 | orchestrator | init 2026-04-04 01:33:02.559196 | orchestrator | 2026-04-04 01:33:02.559283 | orchestrator | ## VOLUME (API) 2026-04-04 01:33:02.559292 | orchestrator | 2026-04-04 01:33:02.559299 | orchestrator | + echo 2026-04-04 01:33:02.559306 | orchestrator | + echo '## VOLUME (API)' 2026-04-04 01:33:02.559315 | orchestrator | + echo 2026-04-04 01:33:02.559321 | orchestrator | + _tempest tempest.api.volume 2026-04-04 01:33:02.559328 | orchestrator | + local regex=tempest.api.volume 2026-04-04 01:33:02.559888 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16 2026-04-04 01:33:02.561002 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:33:02.564576 | orchestrator | + tee -a /opt/tempest/20260404-0133.log 2026-04-04 01:33:06.065792 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.volume --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:33:06.065887 | orchestrator | Did you mean one of these? 
2026-04-04 01:33:06.065897 | orchestrator | help 2026-04-04 01:33:06.065902 | orchestrator | init 2026-04-04 01:33:06.403933 | orchestrator | 2026-04-04 01:33:06.404007 | orchestrator | ## COMPUTE (API) 2026-04-04 01:33:06.404017 | orchestrator | 2026-04-04 01:33:06.404022 | orchestrator | + echo 2026-04-04 01:33:06.404026 | orchestrator | + echo '## COMPUTE (API)' 2026-04-04 01:33:06.404031 | orchestrator | + echo 2026-04-04 01:33:06.404035 | orchestrator | + _tempest tempest.api.compute 2026-04-04 01:33:06.404063 | orchestrator | + local regex=tempest.api.compute 2026-04-04 01:33:06.404070 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16 2026-04-04 01:33:06.404534 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:33:06.407953 | orchestrator | + tee -a /opt/tempest/20260404-0133.log 2026-04-04 01:33:09.911449 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.compute --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:33:09.911519 | orchestrator | Did you mean one of these? 
2026-04-04 01:33:09.911531 | orchestrator | help 2026-04-04 01:33:09.911537 | orchestrator | init 2026-04-04 01:33:10.262832 | orchestrator | 2026-04-04 01:33:10.262889 | orchestrator | ## DNS (API) 2026-04-04 01:33:10.262897 | orchestrator | 2026-04-04 01:33:10.262901 | orchestrator | + echo 2026-04-04 01:33:10.262906 | orchestrator | + echo '## DNS (API)' 2026-04-04 01:33:10.262911 | orchestrator | + echo 2026-04-04 01:33:10.262915 | orchestrator | + _tempest designate_tempest_plugin.tests.api.v2 2026-04-04 01:33:10.262919 | orchestrator | + local regex=designate_tempest_plugin.tests.api.v2 2026-04-04 01:33:10.262925 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16 2026-04-04 01:33:10.263761 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:33:10.266532 | orchestrator | + tee -a /opt/tempest/20260404-0133.log 2026-04-04 01:33:13.502275 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex designate_tempest_plugin.tests.api.v2 --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:33:13.502352 | orchestrator | Did you mean one of these? 
2026-04-04 01:33:13.502362 | orchestrator | help 2026-04-04 01:33:13.502369 | orchestrator | init 2026-04-04 01:33:13.757680 | orchestrator | 2026-04-04 01:33:13.757751 | orchestrator | ## OBJECT-STORE (API) 2026-04-04 01:33:13.757758 | orchestrator | 2026-04-04 01:33:13.757763 | orchestrator | + echo 2026-04-04 01:33:13.757767 | orchestrator | + echo '## OBJECT-STORE (API)' 2026-04-04 01:33:13.757772 | orchestrator | + echo 2026-04-04 01:33:13.757776 | orchestrator | + _tempest tempest.api.object_storage 2026-04-04 01:33:13.757781 | orchestrator | + local regex=tempest.api.object_storage 2026-04-04 01:33:13.757786 | orchestrator | + docker run --rm -v /opt/tempest:/tempest -v /etc/ssl/certs:/etc/ssl/certs:ro -e PYTHONWARNINGS=ignore::SyntaxWarning --network host --name tempest registry.osism.tech/osism/tempest:latest run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16 2026-04-04 01:33:13.758284 | orchestrator | ++ date +%Y%m%d-%H%M 2026-04-04 01:33:13.760517 | orchestrator | + tee -a /opt/tempest/20260404-0133.log 2026-04-04 01:33:17.080338 | orchestrator | tempest: 'run --workspace-path /tempest/workspace.yaml --workspace tempest --exclude-list /tempest/exclude.lst --regex tempest.api.object_storage --concurrency 16' is not a tempest command. See 'tempest --help'. 2026-04-04 01:33:17.080429 | orchestrator | Did you mean one of these? 
2026-04-04 01:33:17.080438 | orchestrator | help 2026-04-04 01:33:17.080443 | orchestrator | init 2026-04-04 01:33:17.547821 | orchestrator | ok: Runtime: 0:01:42.606909 2026-04-04 01:33:17.569744 | 2026-04-04 01:33:17.569877 | TASK [Check prometheus alert status] 2026-04-04 01:33:18.106173 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:18.109704 | 2026-04-04 01:33:18.109875 | PLAY RECAP 2026-04-04 01:33:18.110018 | orchestrator | ok: 25 changed: 12 unreachable: 0 failed: 0 skipped: 4 rescued: 0 ignored: 0 2026-04-04 01:33:18.110080 | 2026-04-04 01:33:18.352750 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2026-04-04 01:33:18.355577 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-04 01:33:19.158338 | 2026-04-04 01:33:19.158611 | PLAY [Post output play] 2026-04-04 01:33:19.175194 | 2026-04-04 01:33:19.175374 | LOOP [stage-output : Register sources] 2026-04-04 01:33:19.237606 | 2026-04-04 01:33:19.237843 | TASK [stage-output : Check sudo] 2026-04-04 01:33:20.123344 | orchestrator | sudo: a password is required 2026-04-04 01:33:20.275706 | orchestrator | ok: Runtime: 0:00:00.014215 2026-04-04 01:33:20.291764 | 2026-04-04 01:33:20.291928 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-04 01:33:20.332784 | 2026-04-04 01:33:20.333085 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-04 01:33:20.400218 | orchestrator | ok 2026-04-04 01:33:20.408959 | 2026-04-04 01:33:20.409096 | LOOP [stage-output : Ensure target folders exist] 2026-04-04 01:33:20.949454 | orchestrator | ok: "docs" 2026-04-04 01:33:20.949767 | 2026-04-04 01:33:21.191673 | orchestrator | ok: "artifacts" 2026-04-04 01:33:21.444914 | orchestrator | ok: "logs" 2026-04-04 01:33:21.465705 | 2026-04-04 01:33:21.465917 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-04 01:33:21.511272 | 2026-04-04 01:33:21.511666 | TASK 
[stage-output : Make all log files readable] 2026-04-04 01:33:21.812158 | orchestrator | ok 2026-04-04 01:33:21.820546 | 2026-04-04 01:33:21.820690 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-04 01:33:21.856254 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:21.877483 | 2026-04-04 01:33:21.877696 | TASK [stage-output : Discover log files for compression] 2026-04-04 01:33:21.903038 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:21.917517 | 2026-04-04 01:33:21.917675 | LOOP [stage-output : Archive everything from logs] 2026-04-04 01:33:21.958883 | 2026-04-04 01:33:21.959056 | PLAY [Post cleanup play] 2026-04-04 01:33:21.967033 | 2026-04-04 01:33:21.967136 | TASK [Set cloud fact (Zuul deployment)] 2026-04-04 01:33:22.026249 | orchestrator | ok 2026-04-04 01:33:22.037588 | 2026-04-04 01:33:22.037704 | TASK [Set cloud fact (local deployment)] 2026-04-04 01:33:22.071560 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:22.085936 | 2026-04-04 01:33:22.086081 | TASK [Clean the cloud environment] 2026-04-04 01:33:22.778522 | orchestrator | 2026-04-04 01:33:22 - clean up servers 2026-04-04 01:33:23.537820 | orchestrator | 2026-04-04 01:33:23 - testbed-manager 2026-04-04 01:33:23.627182 | orchestrator | 2026-04-04 01:33:23 - testbed-node-2 2026-04-04 01:33:23.709944 | orchestrator | 2026-04-04 01:33:23 - testbed-node-4 2026-04-04 01:33:23.794360 | orchestrator | 2026-04-04 01:33:23 - testbed-node-5 2026-04-04 01:33:23.895611 | orchestrator | 2026-04-04 01:33:23 - testbed-node-0 2026-04-04 01:33:23.990102 | orchestrator | 2026-04-04 01:33:23 - testbed-node-1 2026-04-04 01:33:24.111792 | orchestrator | 2026-04-04 01:33:24 - testbed-node-3 2026-04-04 01:33:24.199836 | orchestrator | 2026-04-04 01:33:24 - clean up keypairs 2026-04-04 01:33:24.217021 | orchestrator | 2026-04-04 01:33:24 - testbed 2026-04-04 01:33:24.237850 | orchestrator | 2026-04-04 01:33:24 - wait for 
servers to be gone 2026-04-04 01:33:37.160073 | orchestrator | 2026-04-04 01:33:37 - clean up ports 2026-04-04 01:33:37.347640 | orchestrator | 2026-04-04 01:33:37 - 0f44392b-1814-4b4e-ad46-d6bbcc0d799c 2026-04-04 01:33:37.768473 | orchestrator | 2026-04-04 01:33:37 - 2e69247a-4ff0-41e1-be07-a81ad57a4cee 2026-04-04 01:33:38.001971 | orchestrator | 2026-04-04 01:33:38 - 30fc91b9-be6c-4231-907e-39f2b336c168 2026-04-04 01:33:38.217903 | orchestrator | 2026-04-04 01:33:38 - 7db8d041-ebbb-47b0-a7a8-c90ee9e4ccbe 2026-04-04 01:33:38.421966 | orchestrator | 2026-04-04 01:33:38 - a713c87d-1ffb-4496-98a2-a2ed7234cf5a 2026-04-04 01:33:38.620089 | orchestrator | 2026-04-04 01:33:38 - c3a16ef5-be67-44d2-ac9e-53724911c930 2026-04-04 01:33:38.826177 | orchestrator | 2026-04-04 01:33:38 - d4a2d7ef-8a0b-4dbc-95ac-6fa1b5b2544d 2026-04-04 01:33:39.034120 | orchestrator | 2026-04-04 01:33:39 - clean up volumes 2026-04-04 01:33:39.181962 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-4-node-base 2026-04-04 01:33:39.220059 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-5-node-base 2026-04-04 01:33:39.256794 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-0-node-base 2026-04-04 01:33:39.296103 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-1-node-base 2026-04-04 01:33:39.336379 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-manager-base 2026-04-04 01:33:39.378000 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-3-node-base 2026-04-04 01:33:39.417260 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-2-node-base 2026-04-04 01:33:39.459443 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-8-node-5 2026-04-04 01:33:39.500052 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-2-node-5 2026-04-04 01:33:39.541901 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-4-node-4 2026-04-04 01:33:39.585330 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-0-node-3 2026-04-04 01:33:39.624889 | orchestrator | 2026-04-04 01:33:39 - 
testbed-volume-1-node-4 2026-04-04 01:33:39.663087 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-3-node-3 2026-04-04 01:33:39.708352 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-6-node-3 2026-04-04 01:33:39.747366 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-5-node-5 2026-04-04 01:33:39.789432 | orchestrator | 2026-04-04 01:33:39 - testbed-volume-7-node-4 2026-04-04 01:33:39.836981 | orchestrator | 2026-04-04 01:33:39 - disconnect routers 2026-04-04 01:33:39.962514 | orchestrator | 2026-04-04 01:33:39 - testbed 2026-04-04 01:33:41.038324 | orchestrator | 2026-04-04 01:33:41 - clean up subnets 2026-04-04 01:33:41.089527 | orchestrator | 2026-04-04 01:33:41 - subnet-testbed-management 2026-04-04 01:33:41.249135 | orchestrator | 2026-04-04 01:33:41 - clean up networks 2026-04-04 01:33:41.422785 | orchestrator | 2026-04-04 01:33:41 - net-testbed-management 2026-04-04 01:33:41.705214 | orchestrator | 2026-04-04 01:33:41 - clean up security groups 2026-04-04 01:33:41.747267 | orchestrator | 2026-04-04 01:33:41 - testbed-node 2026-04-04 01:33:41.859146 | orchestrator | 2026-04-04 01:33:41 - testbed-management 2026-04-04 01:33:41.960536 | orchestrator | 2026-04-04 01:33:41 - clean up floating ips 2026-04-04 01:33:41.991303 | orchestrator | 2026-04-04 01:33:41 - 81.163.192.76 2026-04-04 01:33:42.386227 | orchestrator | 2026-04-04 01:33:42 - clean up routers 2026-04-04 01:33:42.488402 | orchestrator | 2026-04-04 01:33:42 - testbed 2026-04-04 01:33:44.141102 | orchestrator | ok: Runtime: 0:00:21.571845 2026-04-04 01:33:44.145378 | 2026-04-04 01:33:44.145531 | PLAY RECAP 2026-04-04 01:33:44.145653 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-04-04 01:33:44.145714 | 2026-04-04 01:33:44.279706 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-04 01:33:44.280799 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 
2026-04-04 01:33:45.050053 | 2026-04-04 01:33:45.050230 | PLAY [Cleanup play] 2026-04-04 01:33:45.066690 | 2026-04-04 01:33:45.066870 | TASK [Set cloud fact (Zuul deployment)] 2026-04-04 01:33:45.119882 | orchestrator | ok 2026-04-04 01:33:45.128424 | 2026-04-04 01:33:45.128586 | TASK [Set cloud fact (local deployment)] 2026-04-04 01:33:45.163595 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:45.175016 | 2026-04-04 01:33:45.175203 | TASK [Clean the cloud environment] 2026-04-04 01:33:46.384667 | orchestrator | 2026-04-04 01:33:46 - clean up servers 2026-04-04 01:33:46.861999 | orchestrator | 2026-04-04 01:33:46 - clean up keypairs 2026-04-04 01:33:46.876634 | orchestrator | 2026-04-04 01:33:46 - wait for servers to be gone 2026-04-04 01:33:46.917269 | orchestrator | 2026-04-04 01:33:46 - clean up ports 2026-04-04 01:33:46.990347 | orchestrator | 2026-04-04 01:33:46 - clean up volumes 2026-04-04 01:33:47.066314 | orchestrator | 2026-04-04 01:33:47 - disconnect routers 2026-04-04 01:33:47.099070 | orchestrator | 2026-04-04 01:33:47 - clean up subnets 2026-04-04 01:33:47.127190 | orchestrator | 2026-04-04 01:33:47 - clean up networks 2026-04-04 01:33:47.783838 | orchestrator | 2026-04-04 01:33:47 - clean up security groups 2026-04-04 01:33:47.818133 | orchestrator | 2026-04-04 01:33:47 - clean up floating ips 2026-04-04 01:33:47.846475 | orchestrator | 2026-04-04 01:33:47 - clean up routers 2026-04-04 01:33:48.214690 | orchestrator | ok: Runtime: 0:00:01.930270 2026-04-04 01:33:48.219404 | 2026-04-04 01:33:48.219533 | PLAY RECAP 2026-04-04 01:33:48.219621 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-04 01:33:48.219665 | 2026-04-04 01:33:48.354124 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-04 01:33:48.355841 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-04 01:33:49.152050 | 
2026-04-04 01:33:49.152217 | PLAY [Base post-fetch] 2026-04-04 01:33:49.168764 | 2026-04-04 01:33:49.168911 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-04 01:33:49.226613 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:49.242141 | 2026-04-04 01:33:49.242393 | TASK [fetch-output : Set log path for single node] 2026-04-04 01:33:49.289049 | orchestrator | ok 2026-04-04 01:33:49.297983 | 2026-04-04 01:33:49.298121 | LOOP [fetch-output : Ensure local output dirs] 2026-04-04 01:33:49.793796 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/logs" 2026-04-04 01:33:50.076241 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/artifacts" 2026-04-04 01:33:50.343462 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/5467f274f7104821808ed5960c284cbe/work/docs" 2026-04-04 01:33:50.373014 | 2026-04-04 01:33:50.373194 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-04 01:33:51.293226 | orchestrator | changed: .d..t...... ./ 2026-04-04 01:33:51.293756 | orchestrator | changed: All items complete 2026-04-04 01:33:51.293858 | 2026-04-04 01:33:51.993982 | orchestrator | changed: .d..t...... ./ 2026-04-04 01:33:52.708059 | orchestrator | changed: .d..t...... 
./ 2026-04-04 01:33:52.728240 | 2026-04-04 01:33:52.728388 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-04 01:33:52.763312 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:52.765528 | orchestrator | skipping: Conditional result was False 2026-04-04 01:33:52.782116 | 2026-04-04 01:33:52.782229 | PLAY RECAP 2026-04-04 01:33:52.782299 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-04 01:33:52.782337 | 2026-04-04 01:33:52.932549 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-04 01:33:52.935139 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-04 01:33:53.726747 | 2026-04-04 01:33:53.726960 | PLAY [Base post] 2026-04-04 01:33:53.741817 | 2026-04-04 01:33:53.741958 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-04 01:33:54.775122 | orchestrator | changed 2026-04-04 01:33:54.784879 | 2026-04-04 01:33:54.785005 | PLAY RECAP 2026-04-04 01:33:54.785074 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-04 01:33:54.785141 | 2026-04-04 01:33:54.913149 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-04 01:33:54.916708 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-04 01:33:55.740915 | 2026-04-04 01:33:55.741092 | PLAY [Base post-logs] 2026-04-04 01:33:55.751873 | 2026-04-04 01:33:55.752008 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-04 01:33:56.221607 | localhost | changed 2026-04-04 01:33:56.239052 | 2026-04-04 01:33:56.239261 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-04 01:33:56.276388 | localhost | ok 2026-04-04 01:33:56.281819 | 2026-04-04 01:33:56.281976 | TASK [Set zuul-log-path fact] 2026-04-04 
01:33:56.298793 | localhost | ok 2026-04-04 01:33:56.308943 | 2026-04-04 01:33:56.309061 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-04 01:33:56.334623 | localhost | ok 2026-04-04 01:33:56.339224 | 2026-04-04 01:33:56.339371 | TASK [upload-logs : Create log directories] 2026-04-04 01:33:56.877197 | localhost | changed 2026-04-04 01:33:56.882202 | 2026-04-04 01:33:56.882402 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-04 01:33:57.407272 | localhost -> localhost | ok: Runtime: 0:00:00.008033 2026-04-04 01:33:57.416577 | 2026-04-04 01:33:57.416778 | TASK [upload-logs : Upload logs to log server] 2026-04-04 01:33:58.015289 | localhost | Output suppressed because no_log was given 2026-04-04 01:33:58.017605 | 2026-04-04 01:33:58.017728 | LOOP [upload-logs : Compress console log and json output] 2026-04-04 01:33:58.075218 | localhost | skipping: Conditional result was False 2026-04-04 01:33:58.081292 | localhost | skipping: Conditional result was False 2026-04-04 01:33:58.090282 | 2026-04-04 01:33:58.090577 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-04 01:33:58.139827 | localhost | skipping: Conditional result was False 2026-04-04 01:33:58.140117 | 2026-04-04 01:33:58.145070 | localhost | skipping: Conditional result was False 2026-04-04 01:33:58.154434 | 2026-04-04 01:33:58.154748 | LOOP [upload-logs : Upload console log and json output]